00:00:00.001 Started by upstream project "autotest-per-patch" build number 127076 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.142 Fetching changes from the remote Git repository 00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.180 Using shallow fetch with depth 1 00:00:00.180 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.180 > git --version # timeout=10 00:00:00.211 > git --version # 'git version 2.39.2' 00:00:00.211 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.995 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.005 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.017 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:07.017 > git config core.sparsecheckout # timeout=10 00:00:07.027 > git read-tree -mu HEAD # timeout=10 00:00:07.042 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:07.074 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:07.074 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:07.161 [Pipeline] Start of Pipeline 00:00:07.175 [Pipeline] library 00:00:07.176 Loading library shm_lib@master 00:00:07.177 Library shm_lib@master is cached. Copying from home. 00:00:07.195 [Pipeline] node 00:00:07.203 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.205 [Pipeline] { 00:00:07.215 [Pipeline] catchError 00:00:07.216 [Pipeline] { 00:00:07.227 [Pipeline] wrap 00:00:07.235 [Pipeline] { 00:00:07.242 [Pipeline] stage 00:00:07.244 [Pipeline] { (Prologue) 00:00:07.446 [Pipeline] sh 00:00:07.725 + logger -p user.info -t JENKINS-CI 00:00:07.741 [Pipeline] echo 00:00:07.742 Node: GP6 00:00:07.750 [Pipeline] sh 00:00:08.042 [Pipeline] setCustomBuildProperty 00:00:08.056 [Pipeline] echo 00:00:08.058 Cleanup processes 00:00:08.064 [Pipeline] sh 00:00:08.342 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.342 2571481 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.354 [Pipeline] sh 00:00:08.632 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.632 ++ grep -v 'sudo pgrep' 00:00:08.632 ++ awk '{print $1}' 00:00:08.632 + sudo kill -9 00:00:08.632 + true 00:00:08.647 [Pipeline] cleanWs 00:00:08.656 [WS-CLEANUP] Deleting project workspace... 00:00:08.656 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.661 [WS-CLEANUP] done 00:00:08.665 [Pipeline] setCustomBuildProperty 00:00:08.680 [Pipeline] sh 00:00:08.955 + sudo git config --global --replace-all safe.directory '*' 00:00:09.044 [Pipeline] httpRequest 00:00:09.082 [Pipeline] echo 00:00:09.084 Sorcerer 10.211.164.101 is alive 00:00:09.093 [Pipeline] httpRequest 00:00:09.098 HttpMethod: GET 00:00:09.098 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.099 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.118 Response Code: HTTP/1.1 200 OK 00:00:09.118 Success: Status code 200 is in the accepted range: 200,404 00:00:09.119 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:14.615 [Pipeline] sh 00:00:14.895 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:14.915 [Pipeline] httpRequest 00:00:14.942 [Pipeline] echo 00:00:14.944 Sorcerer 10.211.164.101 is alive 00:00:14.954 [Pipeline] httpRequest 00:00:14.959 HttpMethod: GET 00:00:14.959 URL: http://10.211.164.101/packages/spdk_5c0b15eedb66a29a06bb17f5a0deff81aa83c43d.tar.gz 00:00:14.960 Sending request to url: http://10.211.164.101/packages/spdk_5c0b15eedb66a29a06bb17f5a0deff81aa83c43d.tar.gz 00:00:14.970 Response Code: HTTP/1.1 200 OK 00:00:14.971 Success: Status code 200 is in the accepted range: 200,404 00:00:14.971 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5c0b15eedb66a29a06bb17f5a0deff81aa83c43d.tar.gz 00:01:07.450 [Pipeline] sh 00:01:07.734 + tar --no-same-owner -xf spdk_5c0b15eedb66a29a06bb17f5a0deff81aa83c43d.tar.gz 00:01:11.031 [Pipeline] sh 00:01:11.311 + git -C spdk log --oneline -n5 00:01:11.311 5c0b15eed nvmf/tcp: fix spdk_nvmf_tcp_control_msg_list queuing 00:01:11.311 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:11.311 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:11.311 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:01:11.311 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:01:11.324 [Pipeline] } 00:01:11.341 [Pipeline] // stage 00:01:11.350 [Pipeline] stage 00:01:11.352 [Pipeline] { (Prepare) 00:01:11.368 [Pipeline] writeFile 00:01:11.386 [Pipeline] sh 00:01:11.669 + logger -p user.info -t JENKINS-CI 00:01:11.681 [Pipeline] sh 00:01:11.961 + logger -p user.info -t JENKINS-CI 00:01:11.973 [Pipeline] sh 00:01:12.255 + cat autorun-spdk.conf 00:01:12.255 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.255 SPDK_TEST_NVMF=1 00:01:12.255 SPDK_TEST_NVME_CLI=1 00:01:12.255 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.255 SPDK_TEST_NVMF_NICS=e810 00:01:12.255 SPDK_TEST_VFIOUSER=1 00:01:12.255 SPDK_RUN_UBSAN=1 00:01:12.255 NET_TYPE=phy 00:01:12.263 RUN_NIGHTLY=0 00:01:12.268 [Pipeline] readFile 00:01:12.294 [Pipeline] withEnv 00:01:12.296 [Pipeline] { 00:01:12.310 [Pipeline] sh 00:01:12.596 + set -ex 00:01:12.596 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.596 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.596 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.596 ++ SPDK_TEST_NVMF=1 00:01:12.596 ++ SPDK_TEST_NVME_CLI=1 00:01:12.596 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.596 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.596 ++ SPDK_TEST_VFIOUSER=1 00:01:12.596 ++ SPDK_RUN_UBSAN=1 00:01:12.596 ++ NET_TYPE=phy 00:01:12.597 ++ RUN_NIGHTLY=0 00:01:12.597 + case $SPDK_TEST_NVMF_NICS in 
00:01:12.597 + DRIVERS=ice 00:01:12.597 + [[ tcp == \r\d\m\a ]] 00:01:12.597 + [[ -n ice ]] 00:01:12.597 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.597 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.597 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:12.597 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.597 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.597 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.597 + true 00:01:12.597 + for D in $DRIVERS 00:01:12.597 + sudo modprobe ice 00:01:12.597 + exit 0 00:01:12.607 [Pipeline] } 00:01:12.626 [Pipeline] // withEnv 00:01:12.631 [Pipeline] } 00:01:12.646 [Pipeline] // stage 00:01:12.656 [Pipeline] catchError 00:01:12.658 [Pipeline] { 00:01:12.674 [Pipeline] timeout 00:01:12.675 Timeout set to expire in 50 min 00:01:12.677 [Pipeline] { 00:01:12.693 [Pipeline] stage 00:01:12.694 [Pipeline] { (Tests) 00:01:12.710 [Pipeline] sh 00:01:12.993 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.993 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.993 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.993 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:12.993 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.993 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.993 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:12.993 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.993 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.993 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.993 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:12.993 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.993 + source /etc/os-release 00:01:12.993 ++ NAME='Fedora Linux' 00:01:12.993 ++ VERSION='38 (Cloud Edition)' 00:01:12.993 ++ ID=fedora 00:01:12.993 ++ VERSION_ID=38 00:01:12.993 ++ VERSION_CODENAME= 00:01:12.993 ++ PLATFORM_ID=platform:f38 00:01:12.993 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:12.993 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.993 ++ LOGO=fedora-logo-icon 00:01:12.993 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:12.993 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.993 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:12.993 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.993 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.993 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.993 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:12.993 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.993 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:12.993 ++ SUPPORT_END=2024-05-14 00:01:12.993 ++ VARIANT='Cloud Edition' 00:01:12.993 ++ VARIANT_ID=cloud 00:01:12.993 + uname -a 00:01:12.993 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:12.993 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:13.929 Hugepages 00:01:13.929 node hugesize free / total 00:01:13.929 node0 1048576kB 0 / 0 00:01:13.929 node0 2048kB 0 / 0 00:01:13.929 node1 1048576kB 0 / 0 00:01:13.929 node1 2048kB 0 / 0 00:01:13.929 00:01:13.929 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.929 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:13.929 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:13.929 I/OAT 
0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:13.929 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:13.929 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:13.929 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:13.929 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:13.929 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:14.187 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:14.187 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:14.187 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:14.187 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:14.187 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:14.187 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:14.187 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:14.187 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:14.187 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:14.187 + rm -f /tmp/spdk-ld-path 00:01:14.187 + source autorun-spdk.conf 00:01:14.187 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.187 ++ SPDK_TEST_NVMF=1 00:01:14.187 ++ SPDK_TEST_NVME_CLI=1 00:01:14.187 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.187 ++ SPDK_TEST_NVMF_NICS=e810 00:01:14.187 ++ SPDK_TEST_VFIOUSER=1 00:01:14.187 ++ SPDK_RUN_UBSAN=1 00:01:14.187 ++ NET_TYPE=phy 00:01:14.187 ++ RUN_NIGHTLY=0 00:01:14.187 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.187 + [[ -n '' ]] 00:01:14.187 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.187 + for M in /var/spdk/build-*-manifest.txt 00:01:14.187 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.187 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.187 + for M in /var/spdk/build-*-manifest.txt 00:01:14.187 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.187 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.187 ++ uname 00:01:14.187 + [[ Linux == \L\i\n\u\x ]] 00:01:14.187 + sudo dmesg -T 00:01:14.187 + sudo dmesg --clear 00:01:14.187 + dmesg_pid=2572153 00:01:14.187 + [[ Fedora Linux == FreeBSD ]] 00:01:14.187 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.187 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.187 + sudo dmesg -Tw 00:01:14.187 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.187 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.187 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.187 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.187 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.187 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.187 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.187 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:14.187 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.187 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.187 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.187 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.187 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.187 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.187 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.187 Test configuration: 00:01:14.187 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.187 SPDK_TEST_NVMF=1 00:01:14.187 SPDK_TEST_NVME_CLI=1 00:01:14.187 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.187 SPDK_TEST_NVMF_NICS=e810 00:01:14.187 SPDK_TEST_VFIOUSER=1 00:01:14.187 SPDK_RUN_UBSAN=1 00:01:14.187 NET_TYPE=phy 00:01:14.187 RUN_NIGHTLY=0 17:44:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:14.187 17:44:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.187 17:44:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.187 17:44:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.187 17:44:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.187 17:44:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.187 17:44:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.187 17:44:00 -- paths/export.sh@5 -- $ export PATH 00:01:14.187 17:44:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.187 17:44:00 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:14.187 17:44:00 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:14.187 17:44:00 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721835840.XXXXXX 00:01:14.187 17:44:00 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721835840.zs4YOo 00:01:14.187 17:44:00 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:14.187 17:44:00 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:14.187 17:44:00 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:14.187 17:44:00 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:14.187 17:44:00 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.187 17:44:00 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:14.187 17:44:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:14.187 17:44:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.187 17:44:00 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:14.187 17:44:00 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:14.187 17:44:00 -- pm/common@17 -- $ local monitor 00:01:14.187 17:44:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.187 17:44:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.187 17:44:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.187 17:44:00 -- pm/common@21 -- $ date +%s 00:01:14.187 17:44:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.187 17:44:00 -- pm/common@21 -- $ date +%s 00:01:14.187 17:44:00 -- pm/common@25 -- $ sleep 1 00:01:14.187 17:44:00 -- pm/common@21 -- $ date +%s 00:01:14.187 17:44:00 -- pm/common@21 -- $ date +%s 00:01:14.187 17:44:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721835840 00:01:14.187 17:44:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721835840 00:01:14.187 17:44:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721835840 00:01:14.187 17:44:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721835840 00:01:14.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721835840_collect-vmstat.pm.log 00:01:14.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721835840_collect-cpu-load.pm.log 00:01:14.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721835840_collect-cpu-temp.pm.log 00:01:14.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721835840_collect-bmc-pm.bmc.pm.log 00:01:15.565 17:44:01 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:15.565 17:44:01 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.565 17:44:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.565 17:44:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.565 17:44:01 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.565 Wed Jul 24 03:44:01 PM UTC 2024 00:01:15.565 17:44:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.565 v24.09-pre-310-g5c0b15eed 00:01:15.565 17:44:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.565 17:44:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.565 17:44:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.565 17:44:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:15.565 17:44:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:15.565 17:44:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.565 ************************************ 00:01:15.565 START TEST ubsan 00:01:15.565 ************************************ 00:01:15.565 17:44:01 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:15.565 using ubsan 00:01:15.565 00:01:15.565 real 0m0.000s 00:01:15.565 user 0m0.000s 00:01:15.565 sys 0m0.000s 00:01:15.565 17:44:01 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:15.565 17:44:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.565 ************************************ 00:01:15.565 END TEST ubsan 00:01:15.565 ************************************ 00:01:15.565 17:44:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.565 17:44:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.565 17:44:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.565 17:44:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.565 17:44:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.565 17:44:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.565 17:44:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.565 17:44:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.565 17:44:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:15.565 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:15.565 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.824 Using 'verbs' RDMA provider 00:01:26.402 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:36.386 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:36.386 Creating mk/config.mk...done. 00:01:36.386 Creating mk/cc.flags.mk...done. 00:01:36.386 Type 'make' to build. 00:01:36.386 17:44:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:36.386 17:44:21 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:36.386 17:44:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:36.386 17:44:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.386 ************************************ 00:01:36.386 START TEST make 00:01:36.386 ************************************ 00:01:36.386 17:44:21 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:36.386 make[1]: Nothing to be done for 'all'. 
00:01:37.770 The Meson build system 00:01:37.770 Version: 1.3.1 00:01:37.770 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:37.770 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:37.770 Build type: native build 00:01:37.770 Project name: libvfio-user 00:01:37.770 Project version: 0.0.1 00:01:37.770 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:37.770 C linker for the host machine: cc ld.bfd 2.39-16 00:01:37.770 Host machine cpu family: x86_64 00:01:37.770 Host machine cpu: x86_64 00:01:37.770 Run-time dependency threads found: YES 00:01:37.770 Library dl found: YES 00:01:37.770 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:37.770 Run-time dependency json-c found: YES 0.17 00:01:37.770 Run-time dependency cmocka found: YES 1.1.7 00:01:37.770 Program pytest-3 found: NO 00:01:37.770 Program flake8 found: NO 00:01:37.770 Program misspell-fixer found: NO 00:01:37.770 Program restructuredtext-lint found: NO 00:01:37.770 Program valgrind found: YES (/usr/bin/valgrind) 00:01:37.770 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:37.770 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.770 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.770 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:37.770 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:37.770 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:37.770 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:37.770 Build targets in project: 8 00:01:37.770 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:37.770 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:37.770 00:01:37.770 libvfio-user 0.0.1 00:01:37.770 00:01:37.770 User defined options 00:01:37.770 buildtype : debug 00:01:37.770 default_library: shared 00:01:37.770 libdir : /usr/local/lib 00:01:37.770 00:01:37.770 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.342 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:38.602 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:38.602 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:38.602 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:38.602 [4/37] Compiling C object samples/null.p/null.c.o 00:01:38.602 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:38.602 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:38.602 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:38.602 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:38.602 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:38.602 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:38.602 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:38.602 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:38.602 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:38.602 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:38.602 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:38.602 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:38.602 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:38.867 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:38.867 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:38.867 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:38.867 [21/37] Compiling C object samples/server.p/server.c.o 00:01:38.867 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:38.867 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:38.867 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:38.867 [25/37] Compiling C object samples/client.p/client.c.o 00:01:38.867 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:38.867 [27/37] Linking target samples/client 00:01:38.867 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:39.131 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:39.131 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:39.131 [31/37] Linking target test/unit_tests 00:01:39.131 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:39.393 [33/37] Linking target samples/server 00:01:39.393 [34/37] Linking target samples/null 00:01:39.393 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:39.393 [36/37] Linking target samples/lspci 00:01:39.393 [37/37] Linking target samples/gpio-pci-idio-16 00:01:39.393 INFO: autodetecting backend as ninja 00:01:39.393 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:39.393 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.965 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:39.965 ninja: no work to do. 00:01:45.230 The Meson build system 00:01:45.230 Version: 1.3.1 00:01:45.230 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:45.230 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:45.230 Build type: native build 00:01:45.230 Program cat found: YES (/usr/bin/cat) 00:01:45.230 Project name: DPDK 00:01:45.230 Project version: 24.03.0 00:01:45.230 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:45.230 C linker for the host machine: cc ld.bfd 2.39-16 00:01:45.230 Host machine cpu family: x86_64 00:01:45.230 Host machine cpu: x86_64 00:01:45.230 Message: ## Building in Developer Mode ## 00:01:45.230 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.230 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.230 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.230 Program python3 found: YES (/usr/bin/python3) 00:01:45.230 Program cat found: YES (/usr/bin/cat) 00:01:45.230 Compiler for C supports arguments -march=native: YES 00:01:45.230 Checking for size of "void *" : 8 00:01:45.230 Checking for size of "void *" : 8 (cached) 00:01:45.230 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:45.230 Library m found: YES 00:01:45.230 Library numa found: YES 00:01:45.230 Has header "numaif.h" : YES 00:01:45.230 Library fdt found: NO 00:01:45.230 Library execinfo found: NO 00:01:45.230 Has header "execinfo.h" : YES 00:01:45.230 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:45.230 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.230 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.230 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.230 Run-time dependency openssl found: YES 3.0.9 00:01:45.230 Run-time dependency libpcap found: YES 1.10.4 00:01:45.230 Has header "pcap.h" with dependency libpcap: YES 00:01:45.230 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.230 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.230 Compiler for C supports arguments -Wformat: YES 00:01:45.230 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.230 Compiler for C supports arguments -Wformat-security: NO 00:01:45.230 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.230 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.230 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.230 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.230 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.230 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.230 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.230 Compiler for C supports arguments -Wundef: YES 00:01:45.230 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.230 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.230 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:45.230 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.230 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.230 Program objdump found: YES (/usr/bin/objdump) 00:01:45.230 Compiler for C supports arguments -mavx512f: YES 00:01:45.230 Checking if "AVX512 checking" compiles: YES 00:01:45.230 Fetching value of define "__SSE4_2__" : 1 00:01:45.230 Fetching value of define "__AES__" : 1 00:01:45.230 Fetching value of define "__AVX__" : 1 00:01:45.230 Fetching value of define "__AVX2__" : (undefined) 00:01:45.230 Fetching value of define "__AVX512BW__" : (undefined) 00:01:45.231 Fetching value of define "__AVX512CD__" : (undefined) 00:01:45.231 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:45.231 Fetching value of define "__AVX512F__" : (undefined) 00:01:45.231 Fetching value of define "__AVX512VL__" : (undefined) 00:01:45.231 Fetching value of define "__PCLMUL__" : 1 00:01:45.231 Fetching value of define "__RDRND__" : 1 00:01:45.231 Fetching value of define "__RDSEED__" : (undefined) 00:01:45.231 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:45.231 Fetching value of define "__znver1__" : (undefined) 00:01:45.231 Fetching value of define "__znver2__" : (undefined) 00:01:45.231 Fetching value of define "__znver3__" : (undefined) 00:01:45.231 Fetching value of define "__znver4__" : (undefined) 00:01:45.231 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.231 Message: lib/log: Defining dependency "log" 00:01:45.231 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.231 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.231 Checking for function "getentropy" : NO 00:01:45.231 Message: lib/eal: Defining dependency "eal" 00:01:45.231 Message: lib/ring: Defining dependency "ring" 00:01:45.231 Message: lib/rcu: Defining dependency "rcu" 00:01:45.231 Message: lib/mempool: Defining dependency "mempool" 00:01:45.231 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.231 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.231 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:45.231 Compiler for C supports arguments -mpclmul: YES 00:01:45.231 Compiler for C supports arguments -maes: YES 00:01:45.231 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.231 Compiler for C supports arguments -mavx512bw: YES 00:01:45.231 Compiler for C supports arguments -mavx512dq: YES 00:01:45.231 Compiler for C supports arguments -mavx512vl: YES 00:01:45.231 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.231 Compiler for C supports arguments -mavx2: YES 00:01:45.231 Compiler for C supports arguments -mavx: YES 00:01:45.231 Message: lib/net: Defining dependency "net" 00:01:45.231 Message: lib/meter: Defining dependency "meter" 00:01:45.231 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.231 Message: lib/pci: Defining dependency "pci" 00:01:45.231 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.231 Message: lib/hash: Defining dependency "hash" 00:01:45.231 Message: lib/timer: Defining dependency "timer" 00:01:45.231 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.231 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.231 Message: lib/dmadev: Defining dependency "dmadev" 00:01:45.231 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.231 Message: lib/power: Defining dependency "power" 00:01:45.231 Message: lib/reorder: Defining dependency "reorder" 00:01:45.231 
Message: lib/security: Defining dependency "security" 00:01:45.231 Has header "linux/userfaultfd.h" : YES 00:01:45.231 Has header "linux/vduse.h" : YES 00:01:45.231 Message: lib/vhost: Defining dependency "vhost" 00:01:45.231 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.231 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.231 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.231 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.231 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.231 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.231 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.231 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.231 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.231 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.231 Program doxygen found: YES (/usr/bin/doxygen) 00:01:45.231 Configuring doxy-api-html.conf using configuration 00:01:45.231 Configuring doxy-api-man.conf using configuration 00:01:45.231 Program mandb found: YES (/usr/bin/mandb) 00:01:45.231 Program sphinx-build found: NO 00:01:45.231 Configuring rte_build_config.h using configuration 00:01:45.231 Message: 00:01:45.231 ================= 00:01:45.231 Applications Enabled 00:01:45.231 ================= 00:01:45.231 00:01:45.231 apps: 00:01:45.231 00:01:45.231 00:01:45.231 Message: 00:01:45.231 ================= 00:01:45.231 Libraries Enabled 00:01:45.231 ================= 00:01:45.231 00:01:45.231 libs: 00:01:45.231 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.231 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.231 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.231 00:01:45.231 Message: 00:01:45.231 =============== 00:01:45.231 Drivers Enabled 00:01:45.231 =============== 00:01:45.231 00:01:45.231 common: 00:01:45.231 00:01:45.231 bus: 00:01:45.231 pci, vdev, 00:01:45.231 mempool: 00:01:45.231 ring, 00:01:45.231 dma: 00:01:45.231 00:01:45.231 net: 00:01:45.231 00:01:45.231 crypto: 00:01:45.231 00:01:45.231 compress: 00:01:45.231 00:01:45.231 vdpa: 00:01:45.231 00:01:45.231 00:01:45.231 Message: 00:01:45.231 ================= 00:01:45.231 Content Skipped 00:01:45.231 ================= 00:01:45.231 00:01:45.231 apps: 00:01:45.231 dumpcap: explicitly disabled via build config 00:01:45.231 graph: explicitly disabled via build config 00:01:45.231 pdump: explicitly disabled via build config 00:01:45.231 proc-info: explicitly disabled via build config 00:01:45.231 test-acl: explicitly disabled via build config 00:01:45.231 test-bbdev: explicitly disabled via build config 00:01:45.231 test-cmdline: explicitly disabled via build config 00:01:45.231 test-compress-perf: explicitly disabled via build config 00:01:45.231 test-crypto-perf: explicitly disabled via build config 00:01:45.231 test-dma-perf: explicitly disabled via build config 00:01:45.231 test-eventdev: explicitly disabled via build config 00:01:45.231 test-fib: explicitly disabled via build config 00:01:45.231 test-flow-perf: explicitly disabled via build config 00:01:45.231 test-gpudev: explicitly disabled via build config 00:01:45.231 test-mldev: explicitly disabled via build config 00:01:45.231 test-pipeline: explicitly disabled via build config 00:01:45.231 test-pmd: explicitly disabled via build config 
00:01:45.231 test-regex: explicitly disabled via build config 00:01:45.231 test-sad: explicitly disabled via build config 00:01:45.231 test-security-perf: explicitly disabled via build config 00:01:45.231 00:01:45.231 libs: 00:01:45.231 argparse: explicitly disabled via build config 00:01:45.231 metrics: explicitly disabled via build config 00:01:45.231 acl: explicitly disabled via build config 00:01:45.231 bbdev: explicitly disabled via build config 00:01:45.231 bitratestats: explicitly disabled via build config 00:01:45.231 bpf: explicitly disabled via build config 00:01:45.231 cfgfile: explicitly disabled via build config 00:01:45.231 distributor: explicitly disabled via build config 00:01:45.231 efd: explicitly disabled via build config 00:01:45.231 eventdev: explicitly disabled via build config 00:01:45.231 dispatcher: explicitly disabled via build config 00:01:45.231 gpudev: explicitly disabled via build config 00:01:45.231 gro: explicitly disabled via build config 00:01:45.231 gso: explicitly disabled via build config 00:01:45.231 ip_frag: explicitly disabled via build config 00:01:45.231 jobstats: explicitly disabled via build config 00:01:45.231 latencystats: explicitly disabled via build config 00:01:45.231 lpm: explicitly disabled via build config 00:01:45.231 member: explicitly disabled via build config 00:01:45.231 pcapng: explicitly disabled via build config 00:01:45.231 rawdev: explicitly disabled via build config 00:01:45.231 regexdev: explicitly disabled via build config 00:01:45.231 mldev: explicitly disabled via build config 00:01:45.231 rib: explicitly disabled via build config 00:01:45.231 sched: explicitly disabled via build config 00:01:45.231 stack: explicitly disabled via build config 00:01:45.231 ipsec: explicitly disabled via build config 00:01:45.231 pdcp: explicitly disabled via build config 00:01:45.231 fib: explicitly disabled via build config 00:01:45.231 port: explicitly disabled via build config 00:01:45.231 pdump: explicitly disabled via build config 00:01:45.231 table: explicitly disabled via build config 00:01:45.231 pipeline: explicitly disabled via build config 00:01:45.231 graph: explicitly disabled via build config 00:01:45.231 node: explicitly disabled via build config 00:01:45.231 00:01:45.231 drivers: 00:01:45.231 common/cpt: not in enabled drivers build config 00:01:45.231 common/dpaax: not in enabled drivers build config 00:01:45.231 common/iavf: not in enabled drivers build config 00:01:45.231 common/idpf: not in enabled drivers build config 00:01:45.231 common/ionic: not in enabled drivers build config 00:01:45.231 common/mvep: not in enabled drivers build config 00:01:45.231 common/octeontx: not in enabled drivers build config 00:01:45.231 bus/auxiliary: not in enabled drivers build config 00:01:45.231 bus/cdx: not in enabled drivers build config 00:01:45.231 bus/dpaa: not in enabled drivers build config 00:01:45.231 bus/fslmc: not in enabled drivers build config 00:01:45.231 bus/ifpga: not in enabled drivers build config 00:01:45.231 bus/platform: not in enabled drivers build config 00:01:45.231 bus/uacce: not in enabled drivers build config 00:01:45.231 bus/vmbus: not in enabled drivers build config 00:01:45.231 common/cnxk: not in enabled drivers build config 00:01:45.231 common/mlx5: not in enabled drivers build config 00:01:45.231 common/nfp: not in enabled drivers build config 00:01:45.231 common/nitrox: not in enabled drivers build config 00:01:45.231 common/qat: not in enabled drivers build config 00:01:45.231 common/sfc_efx: not in 
enabled drivers build config 00:01:45.231 mempool/bucket: not in enabled drivers build config 00:01:45.231 mempool/cnxk: not in enabled drivers build config 00:01:45.231 mempool/dpaa: not in enabled drivers build config 00:01:45.231 mempool/dpaa2: not in enabled drivers build config 00:01:45.231 mempool/octeontx: not in enabled drivers build config 00:01:45.231 mempool/stack: not in enabled drivers build config 00:01:45.231 dma/cnxk: not in enabled drivers build config 00:01:45.231 dma/dpaa: not in enabled drivers build config 00:01:45.231 dma/dpaa2: not in enabled drivers build config 00:01:45.231 dma/hisilicon: not in enabled drivers build config 00:01:45.232 dma/idxd: not in enabled drivers build config 00:01:45.232 dma/ioat: not in enabled drivers build config 00:01:45.232 dma/skeleton: not in enabled drivers build config 00:01:45.232 net/af_packet: not in enabled drivers build config 00:01:45.232 net/af_xdp: not in enabled drivers build config 00:01:45.232 net/ark: not in enabled drivers build config 00:01:45.232 net/atlantic: not in enabled drivers build config 00:01:45.232 net/avp: not in enabled drivers build config 00:01:45.232 net/axgbe: not in enabled drivers build config 00:01:45.232 net/bnx2x: not in enabled drivers build config 00:01:45.232 net/bnxt: not in enabled drivers build config 00:01:45.232 net/bonding: not in enabled drivers build config 00:01:45.232 net/cnxk: not in enabled drivers build config 00:01:45.232 net/cpfl: not in enabled drivers build config 00:01:45.232 net/cxgbe: not in enabled drivers build config 00:01:45.232 net/dpaa: not in enabled drivers build config 00:01:45.232 net/dpaa2: not in enabled drivers build config 00:01:45.232 net/e1000: not in enabled drivers build config 00:01:45.232 net/ena: not in enabled drivers build config 00:01:45.232 net/enetc: not in enabled drivers build config 00:01:45.232 net/enetfec: not in enabled drivers build config 00:01:45.232 net/enic: not in enabled drivers build config 00:01:45.232 net/failsafe: not in enabled drivers build config 00:01:45.232 net/fm10k: not in enabled drivers build config 00:01:45.232 net/gve: not in enabled drivers build config 00:01:45.232 net/hinic: not in enabled drivers build config 00:01:45.232 net/hns3: not in enabled drivers build config 00:01:45.232 net/i40e: not in enabled drivers build config 00:01:45.232 net/iavf: not in enabled drivers build config 00:01:45.232 net/ice: not in enabled drivers build config 00:01:45.232 net/idpf: not in enabled drivers build config 00:01:45.232 net/igc: not in enabled drivers build config 00:01:45.232 net/ionic: not in enabled drivers build config 00:01:45.232 net/ipn3ke: not in enabled drivers build config 00:01:45.232 net/ixgbe: not in enabled drivers build config 00:01:45.232 net/mana: not in enabled drivers build config 00:01:45.232 net/memif: not in enabled drivers build config 00:01:45.232 net/mlx4: not in enabled drivers build config 00:01:45.232 net/mlx5: not in enabled drivers build config 00:01:45.232 net/mvneta: not in enabled drivers build config 00:01:45.232 net/mvpp2: not in enabled drivers build config 00:01:45.232 net/netvsc: not in enabled drivers build config 00:01:45.232 net/nfb: not in enabled drivers build config 00:01:45.232 net/nfp: not in enabled drivers build config 00:01:45.232 net/ngbe: not in enabled drivers build config 00:01:45.232 net/null: not in enabled drivers build config 00:01:45.232 net/octeontx: not in enabled drivers build config 00:01:45.232 net/octeon_ep: not in enabled drivers build config 00:01:45.232 
net/pcap: not in enabled drivers build config 00:01:45.232 net/pfe: not in enabled drivers build config 00:01:45.232 net/qede: not in enabled drivers build config 00:01:45.232 net/ring: not in enabled drivers build config 00:01:45.232 net/sfc: not in enabled drivers build config 00:01:45.232 net/softnic: not in enabled drivers build config 00:01:45.232 net/tap: not in enabled drivers build config 00:01:45.232 net/thunderx: not in enabled drivers build config 00:01:45.232 net/txgbe: not in enabled drivers build config 00:01:45.232 net/vdev_netvsc: not in enabled drivers build config 00:01:45.232 net/vhost: not in enabled drivers build config 00:01:45.232 net/virtio: not in enabled drivers build config 00:01:45.232 net/vmxnet3: not in enabled drivers build config 00:01:45.232 raw/*: missing internal dependency, "rawdev" 00:01:45.232 crypto/armv8: not in enabled drivers build config 00:01:45.232 crypto/bcmfs: not in enabled drivers build config 00:01:45.232 crypto/caam_jr: not in enabled drivers build config 00:01:45.232 crypto/ccp: not in enabled drivers build config 00:01:45.232 crypto/cnxk: not in enabled drivers build config 00:01:45.232 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.232 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.232 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.232 crypto/mlx5: not in enabled drivers build config 00:01:45.232 crypto/mvsam: not in enabled drivers build config 00:01:45.232 crypto/nitrox: not in enabled drivers build config 00:01:45.232 crypto/null: not in enabled drivers build config 00:01:45.232 crypto/octeontx: not in enabled drivers build config 00:01:45.232 crypto/openssl: not in enabled drivers build config 00:01:45.232 crypto/scheduler: not in enabled drivers build config 00:01:45.232 crypto/uadk: not in enabled drivers build config 00:01:45.232 crypto/virtio: not in enabled drivers build config 00:01:45.232 compress/isal: not in enabled drivers build config 00:01:45.232 compress/mlx5: not in enabled drivers build config 00:01:45.232 compress/nitrox: not in enabled drivers build config 00:01:45.232 compress/octeontx: not in enabled drivers build config 00:01:45.232 compress/zlib: not in enabled drivers build config 00:01:45.232 regex/*: missing internal dependency, "regexdev" 00:01:45.232 ml/*: missing internal dependency, "mldev" 00:01:45.232 vdpa/ifc: not in enabled drivers build config 00:01:45.232 vdpa/mlx5: not in enabled drivers build config 00:01:45.232 vdpa/nfp: not in enabled drivers build config 00:01:45.232 vdpa/sfc: not in enabled drivers build config 00:01:45.232 event/*: missing internal dependency, "eventdev" 00:01:45.232 baseband/*: missing internal dependency, "bbdev" 00:01:45.232 gpu/*: missing internal dependency, "gpudev" 00:01:45.232 00:01:45.232 00:01:45.232 Build targets in project: 85 00:01:45.232 00:01:45.232 DPDK 24.03.0 00:01:45.232 00:01:45.232 User defined options 00:01:45.232 buildtype : debug 00:01:45.232 default_library : shared 00:01:45.232 libdir : lib 00:01:45.232 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:45.232 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:45.232 c_link_args : 00:01:45.232 cpu_instruction_set: native 00:01:45.232 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:45.232 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:45.232 enable_docs : false 00:01:45.232 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:45.232 enable_kmods : false 00:01:45.232 max_lcores : 128 00:01:45.232 tests : false 00:01:45.232 00:01:45.232 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.232 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:45.232 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.232 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.232 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:45.232 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.492 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.492 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.492 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.492 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.492 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.492 [10/268] Linking static target lib/librte_kvargs.a 00:01:45.492 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.492 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.492 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.492 [14/268] Linking static target lib/librte_log.a 00:01:45.492 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.492 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.067 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.067 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:46.067 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.067 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:46.331 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.331 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.331 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.331 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:46.331 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.331 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.331 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.331 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.331 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.331 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:01:46.331 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.331 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.331 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.331 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:46.331 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.331 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:46.331 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.331 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:46.331 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.331 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:46.331 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.331 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.331 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.331 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.331 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.331 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.331 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.331 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:46.331 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.331 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.331 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.331 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.331 [53/268] Linking static target lib/librte_telemetry.a 00:01:46.331 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.331 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.331 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.331 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:46.331 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.592 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.592 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.592 [61/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.592 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:46.592 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.592 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.592 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.592 [66/268] Linking target lib/librte_log.so.24.1 00:01:46.852 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.852 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.852 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.852 [70/268] Linking static target lib/librte_pci.a 00:01:46.852 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.125 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:47.125 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.125 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:47.125 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.125 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.125 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:47.125 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.125 [79/268] Linking target lib/librte_kvargs.so.24.1 00:01:47.125 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.125 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.125 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.125 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.125 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.125 [85/268] Linking static target lib/librte_ring.a 00:01:47.387 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:47.387 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.387 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.387 [89/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.387 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:47.387 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:47.387 [92/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.387 [93/268] Linking static target lib/librte_meter.a 00:01:47.387 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:47.387 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.387 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:47.387 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.387 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:47.387 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.387 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.387 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.387 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.387 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:47.387 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:47.387 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.387 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.387 [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.387 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.387 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:47.387 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:47.387 [111/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.387 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.387 [113/268] Linking static target lib/librte_eal.a 00:01:47.387 [114/268] Linking target lib/librte_telemetry.so.24.1 00:01:47.387 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.649 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.649 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.649 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.649 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.649 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.649 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.649 [122/268] Linking static target lib/librte_rcu.a 00:01:47.649 [123/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.649 [124/268] Linking static target lib/librte_mempool.a 00:01:47.649 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.649 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:47.649 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:47.649 [128/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.649 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.649 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:47.649 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:47.909 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.909 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.909 [134/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.909 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.909 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.909 [137/268] Linking static target lib/librte_net.a 00:01:47.909 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.909 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.909 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.909 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:48.171 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:48.171 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:48.171 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:48.171 [145/268] Linking static target lib/librte_cmdline.a 00:01:48.171 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:48.171 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:48.171 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:48.430 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.430 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:48.430 [151/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.430 [152/268] 
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:48.430 [153/268] Linking static target lib/librte_timer.a 00:01:48.430 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.430 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.430 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.430 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.430 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.430 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:48.430 [160/268] Linking static target lib/librte_dmadev.a 00:01:48.430 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.430 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.688 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.688 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.688 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.688 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.688 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.688 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.688 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.689 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.689 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.689 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:48.689 [173/268] Linking static target lib/librte_compressdev.a 00:01:48.689 [174/268] Linking static target lib/librte_power.a 00:01:48.689 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.689 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.689 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.689 [178/268] Linking static target lib/librte_hash.a 00:01:48.689 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.946 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.947 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.947 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.947 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.947 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.947 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.947 [186/268] Linking static target lib/librte_reorder.a 00:01:48.947 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.947 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.947 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.947 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.947 [191/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.947 [192/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.947 [193/268] Linking static target lib/librte_mbuf.a 00:01:48.947 [194/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.947 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.205 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:49.205 [197/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.205 [198/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.205 [199/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.205 [200/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.205 [201/268] Linking static target drivers/librte_bus_pci.a 00:01:49.205 [202/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.205 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.205 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.205 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.205 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:49.205 [207/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.205 [208/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.463 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.463 [210/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.463 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:49.463 [212/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.463 [213/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.463 [214/268] Linking static target lib/librte_security.a 00:01:49.463 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.463 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.720 [217/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.720 [218/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.720 [219/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.720 [220/268] Linking static target lib/librte_cryptodev.a 00:01:49.720 [221/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.720 [222/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.720 [223/268] Linking static target drivers/librte_mempool_ring.a 00:01:49.720 [224/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.720 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.720 [226/268] Linking static target lib/librte_ethdev.a 00:01:50.650 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.023 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.950 [229/268] Generating lib/eal.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:53.950 [230/268] Linking target lib/librte_eal.so.24.1 00:01:53.950 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:53.950 [232/268] Linking target lib/librte_ring.so.24.1 00:01:53.950 [233/268] Linking target lib/librte_timer.so.24.1 00:01:53.950 [234/268] Linking target lib/librte_meter.so.24.1 00:01:53.950 [235/268] Linking target lib/librte_pci.so.24.1 00:01:53.950 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:53.950 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:53.950 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.950 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:53.950 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:53.950 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:53.950 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:53.950 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:54.207 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:54.207 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:54.207 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:54.207 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:54.207 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:54.207 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:54.207 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:54.465 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:54.465 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:54.465 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:54.465 [254/268] Linking target lib/librte_net.so.24.1 00:01:54.465 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:54.465 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:54.465 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:54.722 [258/268] Linking target lib/librte_security.so.24.1 00:01:54.722 [259/268] Linking target lib/librte_hash.so.24.1 00:01:54.722 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:54.722 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:54.722 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:54.722 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:54.722 [264/268] Linking target lib/librte_power.so.24.1 00:01:57.251 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.251 [266/268] Linking static target lib/librte_vhost.a 00:01:58.629 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.629 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:58.629 INFO: autodetecting backend as ninja 00:01:58.629 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:59.563 CC lib/ut_mock/mock.o 00:01:59.563 CC lib/log/log.o 00:01:59.563 CC lib/log/log_flags.o 00:01:59.563 CC lib/log/log_deprecated.o 00:01:59.563 CC lib/ut/ut.o 00:01:59.563 LIB 
libspdk_log.a 00:01:59.563 LIB libspdk_ut.a 00:01:59.563 LIB libspdk_ut_mock.a 00:01:59.563 SO libspdk_ut_mock.so.6.0 00:01:59.563 SO libspdk_log.so.7.0 00:01:59.563 SO libspdk_ut.so.2.0 00:01:59.563 SYMLINK libspdk_ut_mock.so 00:01:59.563 SYMLINK libspdk_ut.so 00:01:59.563 SYMLINK libspdk_log.so 00:01:59.822 CXX lib/trace_parser/trace.o 00:01:59.822 CC lib/ioat/ioat.o 00:01:59.822 CC lib/dma/dma.o 00:01:59.822 CC lib/util/base64.o 00:01:59.822 CC lib/util/bit_array.o 00:01:59.822 CC lib/util/cpuset.o 00:01:59.822 CC lib/util/crc16.o 00:01:59.822 CC lib/util/crc32.o 00:01:59.822 CC lib/util/crc32c.o 00:01:59.822 CC lib/util/crc32_ieee.o 00:01:59.822 CC lib/util/crc64.o 00:01:59.822 CC lib/util/dif.o 00:01:59.822 CC lib/util/fd.o 00:01:59.822 CC lib/util/fd_group.o 00:01:59.822 CC lib/util/file.o 00:01:59.822 CC lib/util/hexlify.o 00:01:59.822 CC lib/util/iov.o 00:01:59.822 CC lib/util/math.o 00:01:59.822 CC lib/util/net.o 00:01:59.822 CC lib/util/pipe.o 00:01:59.822 CC lib/util/strerror_tls.o 00:01:59.822 CC lib/util/string.o 00:01:59.822 CC lib/util/uuid.o 00:01:59.822 CC lib/util/xor.o 00:01:59.822 CC lib/util/zipf.o 00:02:00.079 CC lib/vfio_user/host/vfio_user_pci.o 00:02:00.079 CC lib/vfio_user/host/vfio_user.o 00:02:00.079 LIB libspdk_dma.a 00:02:00.079 SO libspdk_dma.so.4.0 00:02:00.079 SYMLINK libspdk_dma.so 00:02:00.079 LIB libspdk_ioat.a 00:02:00.079 SO libspdk_ioat.so.7.0 00:02:00.079 SYMLINK libspdk_ioat.so 00:02:00.337 LIB libspdk_vfio_user.a 00:02:00.337 SO libspdk_vfio_user.so.5.0 00:02:00.337 SYMLINK libspdk_vfio_user.so 00:02:00.337 LIB libspdk_util.a 00:02:00.337 SO libspdk_util.so.10.0 00:02:00.594 SYMLINK libspdk_util.so 00:02:00.853 CC lib/rdma_provider/common.o 00:02:00.853 CC lib/idxd/idxd.o 00:02:00.853 CC lib/vmd/vmd.o 00:02:00.853 CC lib/rdma_utils/rdma_utils.o 00:02:00.853 CC lib/conf/conf.o 00:02:00.853 CC lib/json/json_parse.o 00:02:00.853 CC lib/env_dpdk/env.o 00:02:00.853 CC lib/vmd/led.o 00:02:00.853 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:00.853 CC lib/idxd/idxd_user.o 00:02:00.853 CC lib/env_dpdk/memory.o 00:02:00.853 CC lib/idxd/idxd_kernel.o 00:02:00.853 CC lib/env_dpdk/pci.o 00:02:00.853 CC lib/env_dpdk/init.o 00:02:00.853 CC lib/env_dpdk/threads.o 00:02:00.853 CC lib/env_dpdk/pci_ioat.o 00:02:00.853 CC lib/json/json_util.o 00:02:00.853 CC lib/json/json_write.o 00:02:00.853 CC lib/env_dpdk/pci_virtio.o 00:02:00.853 CC lib/env_dpdk/pci_vmd.o 00:02:00.853 CC lib/env_dpdk/pci_idxd.o 00:02:00.853 CC lib/env_dpdk/pci_event.o 00:02:00.853 CC lib/env_dpdk/sigbus_handler.o 00:02:00.853 CC lib/env_dpdk/pci_dpdk.o 00:02:00.853 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:00.853 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:00.853 LIB libspdk_trace_parser.a 00:02:00.853 SO libspdk_trace_parser.so.5.0 00:02:01.110 LIB libspdk_rdma_provider.a 00:02:01.110 SO libspdk_rdma_provider.so.6.0 00:02:01.110 SYMLINK libspdk_trace_parser.so 00:02:01.110 LIB libspdk_conf.a 00:02:01.110 SO libspdk_conf.so.6.0 00:02:01.110 LIB libspdk_rdma_utils.a 00:02:01.110 SYMLINK libspdk_rdma_provider.so 00:02:01.110 SO libspdk_rdma_utils.so.1.0 00:02:01.110 SYMLINK libspdk_conf.so 00:02:01.110 LIB libspdk_json.a 00:02:01.110 SYMLINK libspdk_rdma_utils.so 00:02:01.110 SO libspdk_json.so.6.0 00:02:01.110 SYMLINK libspdk_json.so 00:02:01.368 LIB libspdk_idxd.a 00:02:01.368 CC lib/jsonrpc/jsonrpc_server.o 00:02:01.368 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:01.368 CC lib/jsonrpc/jsonrpc_client.o 00:02:01.368 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:01.368 SO libspdk_idxd.so.12.0 00:02:01.368 
SYMLINK libspdk_idxd.so 00:02:01.625 LIB libspdk_vmd.a 00:02:01.625 SO libspdk_vmd.so.6.0 00:02:01.625 SYMLINK libspdk_vmd.so 00:02:01.625 LIB libspdk_jsonrpc.a 00:02:01.625 SO libspdk_jsonrpc.so.6.0 00:02:01.625 SYMLINK libspdk_jsonrpc.so 00:02:01.882 CC lib/rpc/rpc.o 00:02:02.140 LIB libspdk_rpc.a 00:02:02.140 SO libspdk_rpc.so.6.0 00:02:02.140 SYMLINK libspdk_rpc.so 00:02:02.397 CC lib/trace/trace.o 00:02:02.397 CC lib/notify/notify.o 00:02:02.397 CC lib/keyring/keyring.o 00:02:02.397 CC lib/notify/notify_rpc.o 00:02:02.397 CC lib/trace/trace_flags.o 00:02:02.397 CC lib/keyring/keyring_rpc.o 00:02:02.397 CC lib/trace/trace_rpc.o 00:02:02.397 LIB libspdk_notify.a 00:02:02.655 SO libspdk_notify.so.6.0 00:02:02.655 LIB libspdk_keyring.a 00:02:02.655 SYMLINK libspdk_notify.so 00:02:02.655 LIB libspdk_trace.a 00:02:02.655 SO libspdk_keyring.so.1.0 00:02:02.655 SO libspdk_trace.so.10.0 00:02:02.655 SYMLINK libspdk_keyring.so 00:02:02.655 SYMLINK libspdk_trace.so 00:02:02.913 LIB libspdk_env_dpdk.a 00:02:02.913 CC lib/sock/sock.o 00:02:02.913 CC lib/sock/sock_rpc.o 00:02:02.913 CC lib/thread/thread.o 00:02:02.913 CC lib/thread/iobuf.o 00:02:02.913 SO libspdk_env_dpdk.so.15.0 00:02:02.913 SYMLINK libspdk_env_dpdk.so 00:02:03.171 LIB libspdk_sock.a 00:02:03.171 SO libspdk_sock.so.10.0 00:02:03.430 SYMLINK libspdk_sock.so 00:02:03.430 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:03.430 CC lib/nvme/nvme_ctrlr.o 00:02:03.430 CC lib/nvme/nvme_fabric.o 00:02:03.430 CC lib/nvme/nvme_ns_cmd.o 00:02:03.430 CC lib/nvme/nvme_ns.o 00:02:03.430 CC lib/nvme/nvme_pcie_common.o 00:02:03.430 CC lib/nvme/nvme_pcie.o 00:02:03.430 CC lib/nvme/nvme_qpair.o 00:02:03.430 CC lib/nvme/nvme.o 00:02:03.430 CC lib/nvme/nvme_quirks.o 00:02:03.430 CC lib/nvme/nvme_transport.o 00:02:03.430 CC lib/nvme/nvme_discovery.o 00:02:03.430 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:03.430 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:03.430 CC lib/nvme/nvme_tcp.o 00:02:03.430 CC lib/nvme/nvme_opal.o 00:02:03.430 CC lib/nvme/nvme_io_msg.o 00:02:03.430 CC lib/nvme/nvme_poll_group.o 00:02:03.430 CC lib/nvme/nvme_zns.o 00:02:03.430 CC lib/nvme/nvme_stubs.o 00:02:03.430 CC lib/nvme/nvme_auth.o 00:02:03.430 CC lib/nvme/nvme_cuse.o 00:02:03.430 CC lib/nvme/nvme_rdma.o 00:02:03.430 CC lib/nvme/nvme_vfio_user.o 00:02:04.364 LIB libspdk_thread.a 00:02:04.364 SO libspdk_thread.so.10.1 00:02:04.622 SYMLINK libspdk_thread.so 00:02:04.622 CC lib/virtio/virtio.o 00:02:04.622 CC lib/vfu_tgt/tgt_endpoint.o 00:02:04.622 CC lib/blob/blobstore.o 00:02:04.622 CC lib/vfu_tgt/tgt_rpc.o 00:02:04.622 CC lib/virtio/virtio_vhost_user.o 00:02:04.622 CC lib/blob/request.o 00:02:04.622 CC lib/blob/zeroes.o 00:02:04.622 CC lib/virtio/virtio_vfio_user.o 00:02:04.622 CC lib/virtio/virtio_pci.o 00:02:04.622 CC lib/blob/blob_bs_dev.o 00:02:04.622 CC lib/init/json_config.o 00:02:04.622 CC lib/accel/accel.o 00:02:04.622 CC lib/accel/accel_rpc.o 00:02:04.622 CC lib/init/subsystem.o 00:02:04.622 CC lib/init/subsystem_rpc.o 00:02:04.622 CC lib/accel/accel_sw.o 00:02:04.622 CC lib/init/rpc.o 00:02:04.879 LIB libspdk_init.a 00:02:04.879 SO libspdk_init.so.5.0 00:02:05.137 LIB libspdk_virtio.a 00:02:05.137 LIB libspdk_vfu_tgt.a 00:02:05.137 SYMLINK libspdk_init.so 00:02:05.137 SO libspdk_vfu_tgt.so.3.0 00:02:05.137 SO libspdk_virtio.so.7.0 00:02:05.137 SYMLINK libspdk_vfu_tgt.so 00:02:05.137 SYMLINK libspdk_virtio.so 00:02:05.137 CC lib/event/app.o 00:02:05.137 CC lib/event/reactor.o 00:02:05.137 CC lib/event/log_rpc.o 00:02:05.137 CC lib/event/app_rpc.o 00:02:05.138 CC 
lib/event/scheduler_static.o 00:02:05.704 LIB libspdk_event.a 00:02:05.704 SO libspdk_event.so.14.0 00:02:05.704 LIB libspdk_accel.a 00:02:05.704 SYMLINK libspdk_event.so 00:02:05.704 SO libspdk_accel.so.16.0 00:02:05.962 SYMLINK libspdk_accel.so 00:02:05.962 LIB libspdk_nvme.a 00:02:05.962 CC lib/bdev/bdev.o 00:02:05.962 CC lib/bdev/bdev_rpc.o 00:02:05.962 CC lib/bdev/bdev_zone.o 00:02:05.962 CC lib/bdev/part.o 00:02:05.962 CC lib/bdev/scsi_nvme.o 00:02:05.962 SO libspdk_nvme.so.13.1 00:02:06.220 SYMLINK libspdk_nvme.so 00:02:08.136 LIB libspdk_blob.a 00:02:08.136 SO libspdk_blob.so.11.0 00:02:08.136 SYMLINK libspdk_blob.so 00:02:08.136 CC lib/blobfs/blobfs.o 00:02:08.136 CC lib/blobfs/tree.o 00:02:08.136 CC lib/lvol/lvol.o 00:02:08.701 LIB libspdk_bdev.a 00:02:08.701 SO libspdk_bdev.so.16.0 00:02:08.701 SYMLINK libspdk_bdev.so 00:02:08.701 LIB libspdk_blobfs.a 00:02:08.965 SO libspdk_blobfs.so.10.0 00:02:08.965 CC lib/scsi/dev.o 00:02:08.965 CC lib/nbd/nbd.o 00:02:08.965 CC lib/nvmf/ctrlr.o 00:02:08.965 CC lib/scsi/lun.o 00:02:08.965 CC lib/ublk/ublk.o 00:02:08.965 CC lib/nbd/nbd_rpc.o 00:02:08.965 CC lib/nvmf/ctrlr_discovery.o 00:02:08.965 CC lib/ftl/ftl_core.o 00:02:08.965 CC lib/scsi/port.o 00:02:08.965 CC lib/ublk/ublk_rpc.o 00:02:08.965 CC lib/nvmf/ctrlr_bdev.o 00:02:08.965 CC lib/ftl/ftl_init.o 00:02:08.965 CC lib/scsi/scsi.o 00:02:08.965 CC lib/nvmf/subsystem.o 00:02:08.965 CC lib/scsi/scsi_bdev.o 00:02:08.965 CC lib/ftl/ftl_layout.o 00:02:08.965 CC lib/nvmf/nvmf.o 00:02:08.965 CC lib/scsi/scsi_pr.o 00:02:08.965 CC lib/ftl/ftl_debug.o 00:02:08.965 CC lib/scsi/scsi_rpc.o 00:02:08.965 CC lib/nvmf/nvmf_rpc.o 00:02:08.965 CC lib/ftl/ftl_io.o 00:02:08.965 CC lib/nvmf/transport.o 00:02:08.965 CC lib/scsi/task.o 00:02:08.965 CC lib/ftl/ftl_sb.o 00:02:08.965 CC lib/ftl/ftl_l2p.o 00:02:08.965 CC lib/nvmf/tcp.o 00:02:08.965 CC lib/nvmf/stubs.o 00:02:08.965 CC lib/ftl/ftl_l2p_flat.o 00:02:08.965 CC lib/ftl/ftl_nv_cache.o 00:02:08.965 CC lib/nvmf/mdns_server.o 00:02:08.965 CC lib/nvmf/vfio_user.o 00:02:08.965 CC lib/nvmf/rdma.o 00:02:08.965 CC lib/ftl/ftl_band.o 00:02:08.965 CC lib/ftl/ftl_band_ops.o 00:02:08.965 CC lib/nvmf/auth.o 00:02:08.965 CC lib/ftl/ftl_writer.o 00:02:08.965 CC lib/ftl/ftl_rq.o 00:02:08.965 CC lib/ftl/ftl_reloc.o 00:02:08.965 CC lib/ftl/ftl_l2p_cache.o 00:02:08.965 CC lib/ftl/ftl_p2l.o 00:02:08.965 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.965 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.965 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.965 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.965 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.965 SYMLINK libspdk_blobfs.so 00:02:08.965 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.965 LIB libspdk_lvol.a 00:02:08.965 SO libspdk_lvol.so.10.0 00:02:09.224 SYMLINK libspdk_lvol.so 00:02:09.224 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:09.224 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:09.224 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:09.224 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:09.224 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:09.224 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:09.225 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:09.225 CC lib/ftl/utils/ftl_conf.o 00:02:09.225 CC lib/ftl/utils/ftl_md.o 00:02:09.225 CC lib/ftl/utils/ftl_mempool.o 00:02:09.225 CC lib/ftl/utils/ftl_bitmap.o 00:02:09.225 CC lib/ftl/utils/ftl_property.o 00:02:09.225 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:09.486 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:09.486 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:09.486 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:09.486 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:09.486 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:09.486 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:09.486 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:09.486 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:09.486 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:09.486 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:09.486 CC lib/ftl/base/ftl_base_dev.o 00:02:09.486 CC lib/ftl/base/ftl_base_bdev.o 00:02:09.486 CC lib/ftl/ftl_trace.o 00:02:09.744 LIB libspdk_nbd.a 00:02:09.744 SO libspdk_nbd.so.7.0 00:02:09.744 LIB libspdk_scsi.a 00:02:09.744 SYMLINK libspdk_nbd.so 00:02:09.744 SO libspdk_scsi.so.9.0 00:02:10.002 SYMLINK libspdk_scsi.so 00:02:10.002 LIB libspdk_ublk.a 00:02:10.002 SO libspdk_ublk.so.3.0 00:02:10.002 SYMLINK libspdk_ublk.so 00:02:10.002 CC lib/vhost/vhost.o 00:02:10.002 CC lib/iscsi/conn.o 00:02:10.002 CC lib/vhost/vhost_rpc.o 00:02:10.002 CC lib/iscsi/init_grp.o 00:02:10.002 CC lib/iscsi/iscsi.o 00:02:10.002 CC lib/vhost/vhost_scsi.o 00:02:10.002 CC lib/iscsi/md5.o 00:02:10.002 CC lib/iscsi/param.o 00:02:10.002 CC lib/vhost/vhost_blk.o 00:02:10.002 CC lib/vhost/rte_vhost_user.o 00:02:10.260 CC lib/iscsi/portal_grp.o 00:02:10.260 CC lib/iscsi/tgt_node.o 00:02:10.260 CC lib/iscsi/iscsi_subsystem.o 00:02:10.260 CC lib/iscsi/iscsi_rpc.o 00:02:10.260 CC lib/iscsi/task.o 00:02:10.260 LIB libspdk_ftl.a 00:02:10.517 SO libspdk_ftl.so.9.0 00:02:10.774 SYMLINK libspdk_ftl.so 00:02:11.410 LIB libspdk_vhost.a 00:02:11.410 SO libspdk_vhost.so.8.0 00:02:11.410 LIB libspdk_nvmf.a 00:02:11.410 SYMLINK libspdk_vhost.so 00:02:11.410 SO libspdk_nvmf.so.19.0 00:02:11.668 LIB libspdk_iscsi.a 00:02:11.668 SO libspdk_iscsi.so.8.0 00:02:11.668 SYMLINK libspdk_nvmf.so 00:02:11.668 SYMLINK libspdk_iscsi.so 00:02:12.236 CC module/env_dpdk/env_dpdk_rpc.o 00:02:12.236 CC module/vfu_device/vfu_virtio.o 00:02:12.236 CC module/vfu_device/vfu_virtio_blk.o 00:02:12.236 CC module/vfu_device/vfu_virtio_scsi.o 00:02:12.236 CC module/vfu_device/vfu_virtio_rpc.o 00:02:12.236 CC module/keyring/file/keyring.o 00:02:12.236 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:12.236 CC module/accel/error/accel_error.o 00:02:12.236 CC module/sock/posix/posix.o 00:02:12.236 CC module/keyring/file/keyring_rpc.o 00:02:12.236 CC module/accel/error/accel_error_rpc.o 00:02:12.236 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:12.236 CC module/keyring/linux/keyring.o 00:02:12.236 CC module/scheduler/gscheduler/gscheduler.o 00:02:12.236 CC module/blob/bdev/blob_bdev.o 00:02:12.236 CC module/keyring/linux/keyring_rpc.o 00:02:12.236 CC module/accel/ioat/accel_ioat.o 00:02:12.236 CC module/accel/ioat/accel_ioat_rpc.o 00:02:12.236 CC module/accel/dsa/accel_dsa.o 00:02:12.236 CC module/accel/iaa/accel_iaa.o 00:02:12.236 CC module/accel/dsa/accel_dsa_rpc.o 00:02:12.236 CC module/accel/iaa/accel_iaa_rpc.o 00:02:12.236 LIB libspdk_env_dpdk_rpc.a 00:02:12.236 SO libspdk_env_dpdk_rpc.so.6.0 00:02:12.236 SYMLINK libspdk_env_dpdk_rpc.so 00:02:12.236 LIB libspdk_keyring_file.a 00:02:12.236 LIB libspdk_keyring_linux.a 00:02:12.236 LIB libspdk_scheduler_gscheduler.a 00:02:12.236 LIB libspdk_scheduler_dpdk_governor.a 00:02:12.236 SO libspdk_scheduler_gscheduler.so.4.0 00:02:12.236 SO libspdk_keyring_file.so.1.0 00:02:12.236 SO libspdk_keyring_linux.so.1.0 00:02:12.236 LIB libspdk_accel_error.a 00:02:12.494 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:12.494 LIB libspdk_accel_ioat.a 00:02:12.494 LIB libspdk_scheduler_dynamic.a 00:02:12.494 SO libspdk_accel_error.so.2.0 00:02:12.494 LIB libspdk_accel_iaa.a 00:02:12.494 SO 
libspdk_accel_ioat.so.6.0 00:02:12.494 SYMLINK libspdk_scheduler_gscheduler.so 00:02:12.494 SO libspdk_scheduler_dynamic.so.4.0 00:02:12.494 SYMLINK libspdk_keyring_file.so 00:02:12.494 SYMLINK libspdk_keyring_linux.so 00:02:12.494 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:12.494 SO libspdk_accel_iaa.so.3.0 00:02:12.494 SYMLINK libspdk_accel_error.so 00:02:12.494 LIB libspdk_accel_dsa.a 00:02:12.494 LIB libspdk_blob_bdev.a 00:02:12.494 SYMLINK libspdk_scheduler_dynamic.so 00:02:12.494 SYMLINK libspdk_accel_ioat.so 00:02:12.494 SO libspdk_accel_dsa.so.5.0 00:02:12.494 SO libspdk_blob_bdev.so.11.0 00:02:12.494 SYMLINK libspdk_accel_iaa.so 00:02:12.494 SYMLINK libspdk_blob_bdev.so 00:02:12.494 SYMLINK libspdk_accel_dsa.so 00:02:12.754 LIB libspdk_vfu_device.a 00:02:12.754 SO libspdk_vfu_device.so.3.0 00:02:12.754 CC module/bdev/null/bdev_null.o 00:02:12.754 CC module/bdev/delay/vbdev_delay.o 00:02:12.754 CC module/bdev/nvme/bdev_nvme.o 00:02:12.754 CC module/bdev/gpt/gpt.o 00:02:12.754 CC module/bdev/error/vbdev_error.o 00:02:12.754 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.754 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.754 CC module/bdev/null/bdev_null_rpc.o 00:02:12.754 CC module/bdev/ftl/bdev_ftl.o 00:02:12.754 CC module/bdev/malloc/bdev_malloc.o 00:02:12.754 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.754 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.754 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.754 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.754 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.754 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.754 CC module/bdev/nvme/nvme_rpc.o 00:02:12.754 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.754 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.754 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:12.754 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.754 CC module/bdev/aio/bdev_aio.o 00:02:12.754 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.754 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:12.754 CC module/bdev/nvme/vbdev_opal.o 00:02:12.754 CC module/bdev/split/vbdev_split.o 00:02:12.754 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.754 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:12.754 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.754 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.754 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.754 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.754 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.754 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.754 CC module/bdev/raid/bdev_raid.o 00:02:12.754 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.754 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.754 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.754 CC module/bdev/raid/bdev_raid_sb.o 00:02:12.754 CC module/bdev/raid/raid0.o 00:02:12.754 CC module/bdev/raid/raid1.o 00:02:12.754 CC module/bdev/raid/concat.o 00:02:13.012 SYMLINK libspdk_vfu_device.so 00:02:13.012 LIB libspdk_sock_posix.a 00:02:13.012 SO libspdk_sock_posix.so.6.0 00:02:13.013 LIB libspdk_bdev_split.a 00:02:13.270 LIB libspdk_blobfs_bdev.a 00:02:13.270 SO libspdk_bdev_split.so.6.0 00:02:13.270 SYMLINK libspdk_sock_posix.so 00:02:13.270 SO libspdk_blobfs_bdev.so.6.0 00:02:13.270 SYMLINK libspdk_bdev_split.so 00:02:13.270 LIB libspdk_bdev_error.a 00:02:13.270 LIB libspdk_bdev_null.a 00:02:13.270 SYMLINK libspdk_blobfs_bdev.so 00:02:13.270 LIB libspdk_bdev_passthru.a 00:02:13.270 LIB libspdk_bdev_aio.a 00:02:13.270 SO libspdk_bdev_error.so.6.0 00:02:13.270 SO libspdk_bdev_null.so.6.0 00:02:13.270 SO 
libspdk_bdev_aio.so.6.0 00:02:13.270 SO libspdk_bdev_passthru.so.6.0 00:02:13.270 LIB libspdk_bdev_gpt.a 00:02:13.270 SO libspdk_bdev_gpt.so.6.0 00:02:13.270 LIB libspdk_bdev_ftl.a 00:02:13.270 SYMLINK libspdk_bdev_error.so 00:02:13.270 SYMLINK libspdk_bdev_null.so 00:02:13.270 SYMLINK libspdk_bdev_aio.so 00:02:13.270 SYMLINK libspdk_bdev_passthru.so 00:02:13.270 LIB libspdk_bdev_zone_block.a 00:02:13.270 SO libspdk_bdev_ftl.so.6.0 00:02:13.270 SO libspdk_bdev_zone_block.so.6.0 00:02:13.270 LIB libspdk_bdev_delay.a 00:02:13.270 SYMLINK libspdk_bdev_gpt.so 00:02:13.270 LIB libspdk_bdev_iscsi.a 00:02:13.527 LIB libspdk_bdev_malloc.a 00:02:13.527 SO libspdk_bdev_delay.so.6.0 00:02:13.527 SYMLINK libspdk_bdev_ftl.so 00:02:13.527 SO libspdk_bdev_iscsi.so.6.0 00:02:13.527 SO libspdk_bdev_malloc.so.6.0 00:02:13.527 SYMLINK libspdk_bdev_zone_block.so 00:02:13.527 SYMLINK libspdk_bdev_delay.so 00:02:13.527 SYMLINK libspdk_bdev_iscsi.so 00:02:13.527 SYMLINK libspdk_bdev_malloc.so 00:02:13.527 LIB libspdk_bdev_virtio.a 00:02:13.527 SO libspdk_bdev_virtio.so.6.0 00:02:13.527 LIB libspdk_bdev_lvol.a 00:02:13.527 SO libspdk_bdev_lvol.so.6.0 00:02:13.527 SYMLINK libspdk_bdev_virtio.so 00:02:13.527 SYMLINK libspdk_bdev_lvol.so 00:02:14.091 LIB libspdk_bdev_raid.a 00:02:14.091 SO libspdk_bdev_raid.so.6.0 00:02:14.091 SYMLINK libspdk_bdev_raid.so 00:02:15.464 LIB libspdk_bdev_nvme.a 00:02:15.464 SO libspdk_bdev_nvme.so.7.0 00:02:15.464 SYMLINK libspdk_bdev_nvme.so 00:02:15.721 CC module/event/subsystems/vmd/vmd.o 00:02:15.721 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.721 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.721 CC module/event/subsystems/keyring/keyring.o 00:02:15.721 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.721 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:15.721 CC module/event/subsystems/sock/sock.o 00:02:15.721 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.721 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:15.994 LIB libspdk_event_keyring.a 00:02:15.994 LIB libspdk_event_vhost_blk.a 00:02:15.994 LIB libspdk_event_vmd.a 00:02:15.994 LIB libspdk_event_vfu_tgt.a 00:02:15.994 LIB libspdk_event_sock.a 00:02:15.994 LIB libspdk_event_scheduler.a 00:02:15.994 SO libspdk_event_keyring.so.1.0 00:02:15.994 LIB libspdk_event_iobuf.a 00:02:15.994 SO libspdk_event_vhost_blk.so.3.0 00:02:15.994 SO libspdk_event_vfu_tgt.so.3.0 00:02:15.994 SO libspdk_event_vmd.so.6.0 00:02:15.994 SO libspdk_event_scheduler.so.4.0 00:02:15.994 SO libspdk_event_sock.so.5.0 00:02:15.994 SO libspdk_event_iobuf.so.3.0 00:02:15.994 SYMLINK libspdk_event_keyring.so 00:02:15.994 SYMLINK libspdk_event_vhost_blk.so 00:02:15.994 SYMLINK libspdk_event_vfu_tgt.so 00:02:15.994 SYMLINK libspdk_event_sock.so 00:02:15.994 SYMLINK libspdk_event_scheduler.so 00:02:15.994 SYMLINK libspdk_event_vmd.so 00:02:15.994 SYMLINK libspdk_event_iobuf.so 00:02:16.252 CC module/event/subsystems/accel/accel.o 00:02:16.252 LIB libspdk_event_accel.a 00:02:16.252 SO libspdk_event_accel.so.6.0 00:02:16.511 SYMLINK libspdk_event_accel.so 00:02:16.511 CC module/event/subsystems/bdev/bdev.o 00:02:16.769 LIB libspdk_event_bdev.a 00:02:16.769 SO libspdk_event_bdev.so.6.0 00:02:16.769 SYMLINK libspdk_event_bdev.so 00:02:17.027 CC module/event/subsystems/ublk/ublk.o 00:02:17.027 CC module/event/subsystems/nbd/nbd.o 00:02:17.027 CC module/event/subsystems/scsi/scsi.o 00:02:17.027 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.027 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:17.027 LIB libspdk_event_nbd.a 
00:02:17.027 LIB libspdk_event_ublk.a 00:02:17.027 LIB libspdk_event_scsi.a 00:02:17.284 SO libspdk_event_nbd.so.6.0 00:02:17.284 SO libspdk_event_ublk.so.3.0 00:02:17.284 SO libspdk_event_scsi.so.6.0 00:02:17.284 SYMLINK libspdk_event_nbd.so 00:02:17.284 SYMLINK libspdk_event_ublk.so 00:02:17.284 SYMLINK libspdk_event_scsi.so 00:02:17.284 LIB libspdk_event_nvmf.a 00:02:17.284 SO libspdk_event_nvmf.so.6.0 00:02:17.284 SYMLINK libspdk_event_nvmf.so 00:02:17.284 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.284 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.540 LIB libspdk_event_vhost_scsi.a 00:02:17.540 LIB libspdk_event_iscsi.a 00:02:17.540 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.540 SO libspdk_event_iscsi.so.6.0 00:02:17.540 SYMLINK libspdk_event_vhost_scsi.so 00:02:17.540 SYMLINK libspdk_event_iscsi.so 00:02:17.799 SO libspdk.so.6.0 00:02:17.799 SYMLINK libspdk.so 00:02:17.799 CC app/trace_record/trace_record.o 00:02:17.799 CXX app/trace/trace.o 00:02:17.799 CC app/spdk_lspci/spdk_lspci.o 00:02:17.799 CC app/spdk_nvme_perf/perf.o 00:02:17.799 CC app/spdk_nvme_identify/identify.o 00:02:17.799 TEST_HEADER include/spdk/accel.h 00:02:17.799 TEST_HEADER include/spdk/accel_module.h 00:02:17.799 TEST_HEADER include/spdk/assert.h 00:02:17.799 CC test/rpc_client/rpc_client_test.o 00:02:17.799 TEST_HEADER include/spdk/barrier.h 00:02:17.799 TEST_HEADER include/spdk/base64.h 00:02:18.062 TEST_HEADER include/spdk/bdev.h 00:02:18.062 TEST_HEADER include/spdk/bdev_module.h 00:02:18.062 CC app/spdk_top/spdk_top.o 00:02:18.062 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.062 TEST_HEADER include/spdk/bit_array.h 00:02:18.062 TEST_HEADER include/spdk/bit_pool.h 00:02:18.062 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.062 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.062 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.062 TEST_HEADER include/spdk/blobfs.h 00:02:18.062 TEST_HEADER include/spdk/blob.h 00:02:18.062 TEST_HEADER include/spdk/conf.h 00:02:18.062 TEST_HEADER include/spdk/config.h 00:02:18.062 TEST_HEADER include/spdk/cpuset.h 00:02:18.062 TEST_HEADER include/spdk/crc16.h 00:02:18.062 TEST_HEADER include/spdk/crc32.h 00:02:18.062 TEST_HEADER include/spdk/crc64.h 00:02:18.062 TEST_HEADER include/spdk/dif.h 00:02:18.062 TEST_HEADER include/spdk/dma.h 00:02:18.062 TEST_HEADER include/spdk/endian.h 00:02:18.062 TEST_HEADER include/spdk/env.h 00:02:18.062 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.062 TEST_HEADER include/spdk/event.h 00:02:18.062 TEST_HEADER include/spdk/fd_group.h 00:02:18.062 TEST_HEADER include/spdk/fd.h 00:02:18.062 TEST_HEADER include/spdk/file.h 00:02:18.062 TEST_HEADER include/spdk/ftl.h 00:02:18.062 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.062 TEST_HEADER include/spdk/hexlify.h 00:02:18.062 TEST_HEADER include/spdk/histogram_data.h 00:02:18.062 TEST_HEADER include/spdk/idxd.h 00:02:18.062 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.062 TEST_HEADER include/spdk/ioat.h 00:02:18.062 TEST_HEADER include/spdk/init.h 00:02:18.062 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.062 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.062 TEST_HEADER include/spdk/json.h 00:02:18.062 TEST_HEADER include/spdk/keyring.h 00:02:18.062 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.062 TEST_HEADER include/spdk/keyring_module.h 00:02:18.062 TEST_HEADER include/spdk/likely.h 00:02:18.062 TEST_HEADER include/spdk/log.h 00:02:18.062 TEST_HEADER include/spdk/memory.h 00:02:18.062 TEST_HEADER include/spdk/lvol.h 00:02:18.062 TEST_HEADER include/spdk/mmio.h 00:02:18.062 
TEST_HEADER include/spdk/nbd.h 00:02:18.062 TEST_HEADER include/spdk/net.h 00:02:18.062 TEST_HEADER include/spdk/notify.h 00:02:18.062 TEST_HEADER include/spdk/nvme.h 00:02:18.062 TEST_HEADER include/spdk/nvme_intel.h 00:02:18.062 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:18.062 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:18.062 TEST_HEADER include/spdk/nvme_spec.h 00:02:18.062 TEST_HEADER include/spdk/nvme_zns.h 00:02:18.062 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:18.062 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:18.062 TEST_HEADER include/spdk/nvmf.h 00:02:18.062 TEST_HEADER include/spdk/nvmf_spec.h 00:02:18.062 TEST_HEADER include/spdk/nvmf_transport.h 00:02:18.062 TEST_HEADER include/spdk/opal.h 00:02:18.062 TEST_HEADER include/spdk/opal_spec.h 00:02:18.062 TEST_HEADER include/spdk/pci_ids.h 00:02:18.062 TEST_HEADER include/spdk/pipe.h 00:02:18.062 TEST_HEADER include/spdk/queue.h 00:02:18.062 TEST_HEADER include/spdk/reduce.h 00:02:18.062 TEST_HEADER include/spdk/rpc.h 00:02:18.062 TEST_HEADER include/spdk/scheduler.h 00:02:18.062 TEST_HEADER include/spdk/scsi.h 00:02:18.062 TEST_HEADER include/spdk/scsi_spec.h 00:02:18.062 TEST_HEADER include/spdk/sock.h 00:02:18.062 TEST_HEADER include/spdk/stdinc.h 00:02:18.062 TEST_HEADER include/spdk/string.h 00:02:18.062 TEST_HEADER include/spdk/thread.h 00:02:18.062 TEST_HEADER include/spdk/trace_parser.h 00:02:18.062 TEST_HEADER include/spdk/trace.h 00:02:18.062 TEST_HEADER include/spdk/tree.h 00:02:18.062 TEST_HEADER include/spdk/ublk.h 00:02:18.062 TEST_HEADER include/spdk/util.h 00:02:18.062 TEST_HEADER include/spdk/uuid.h 00:02:18.062 TEST_HEADER include/spdk/version.h 00:02:18.062 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:18.062 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:18.062 TEST_HEADER include/spdk/vhost.h 00:02:18.062 TEST_HEADER include/spdk/vmd.h 00:02:18.062 TEST_HEADER include/spdk/xor.h 00:02:18.062 TEST_HEADER include/spdk/zipf.h 00:02:18.062 CXX test/cpp_headers/accel.o 00:02:18.062 CXX test/cpp_headers/accel_module.o 00:02:18.062 CXX test/cpp_headers/assert.o 00:02:18.062 CXX test/cpp_headers/barrier.o 00:02:18.062 CXX test/cpp_headers/base64.o 00:02:18.062 CXX test/cpp_headers/bdev.o 00:02:18.062 CXX test/cpp_headers/bdev_module.o 00:02:18.062 CXX test/cpp_headers/bdev_zone.o 00:02:18.062 CXX test/cpp_headers/bit_array.o 00:02:18.062 CXX test/cpp_headers/bit_pool.o 00:02:18.062 CXX test/cpp_headers/blob_bdev.o 00:02:18.062 CXX test/cpp_headers/blobfs_bdev.o 00:02:18.062 CXX test/cpp_headers/blobfs.o 00:02:18.062 CXX test/cpp_headers/blob.o 00:02:18.062 CXX test/cpp_headers/conf.o 00:02:18.062 CXX test/cpp_headers/config.o 00:02:18.062 CXX test/cpp_headers/cpuset.o 00:02:18.062 CXX test/cpp_headers/crc16.o 00:02:18.062 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.062 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.062 CC app/spdk_dd/spdk_dd.o 00:02:18.062 CC app/nvmf_tgt/nvmf_main.o 00:02:18.062 CXX test/cpp_headers/crc32.o 00:02:18.062 CC examples/util/zipf/zipf.o 00:02:18.062 CC examples/ioat/perf/perf.o 00:02:18.062 CC app/spdk_tgt/spdk_tgt.o 00:02:18.062 CC test/app/histogram_perf/histogram_perf.o 00:02:18.062 CC test/thread/poller_perf/poller_perf.o 00:02:18.062 CC examples/ioat/verify/verify.o 00:02:18.062 CC test/env/pci/pci_ut.o 00:02:18.062 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:18.062 CC test/app/jsoncat/jsoncat.o 00:02:18.062 CC test/app/stub/stub.o 00:02:18.062 CC test/env/memory/memory_ut.o 00:02:18.062 CC app/fio/nvme/fio_plugin.o 00:02:18.062 CC test/env/vtophys/vtophys.o 
00:02:18.062 CC test/dma/test_dma/test_dma.o 00:02:18.062 CC app/fio/bdev/fio_plugin.o 00:02:18.062 CC test/app/bdev_svc/bdev_svc.o 00:02:18.325 LINK spdk_lspci 00:02:18.325 CC test/env/mem_callbacks/mem_callbacks.o 00:02:18.325 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.325 LINK rpc_client_test 00:02:18.325 LINK spdk_nvme_discover 00:02:18.325 LINK zipf 00:02:18.325 LINK poller_perf 00:02:18.325 LINK jsoncat 00:02:18.325 LINK histogram_perf 00:02:18.325 CXX test/cpp_headers/crc64.o 00:02:18.325 LINK nvmf_tgt 00:02:18.325 LINK vtophys 00:02:18.325 CXX test/cpp_headers/dif.o 00:02:18.325 CXX test/cpp_headers/dma.o 00:02:18.586 LINK interrupt_tgt 00:02:18.586 CXX test/cpp_headers/endian.o 00:02:18.586 CXX test/cpp_headers/env_dpdk.o 00:02:18.586 CXX test/cpp_headers/env.o 00:02:18.586 CXX test/cpp_headers/event.o 00:02:18.586 LINK env_dpdk_post_init 00:02:18.586 CXX test/cpp_headers/fd_group.o 00:02:18.586 CXX test/cpp_headers/fd.o 00:02:18.586 CXX test/cpp_headers/file.o 00:02:18.586 LINK spdk_trace_record 00:02:18.586 CXX test/cpp_headers/ftl.o 00:02:18.586 CXX test/cpp_headers/gpt_spec.o 00:02:18.586 LINK stub 00:02:18.586 CXX test/cpp_headers/hexlify.o 00:02:18.586 CXX test/cpp_headers/histogram_data.o 00:02:18.586 LINK ioat_perf 00:02:18.586 LINK iscsi_tgt 00:02:18.586 CXX test/cpp_headers/idxd_spec.o 00:02:18.586 CXX test/cpp_headers/idxd.o 00:02:18.586 LINK spdk_tgt 00:02:18.586 LINK bdev_svc 00:02:18.586 CXX test/cpp_headers/init.o 00:02:18.586 LINK verify 00:02:18.586 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.586 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.586 CXX test/cpp_headers/ioat.o 00:02:18.848 CXX test/cpp_headers/ioat_spec.o 00:02:18.848 CXX test/cpp_headers/iscsi_spec.o 00:02:18.848 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.848 CXX test/cpp_headers/json.o 00:02:18.848 LINK spdk_trace 00:02:18.848 CXX test/cpp_headers/jsonrpc.o 00:02:18.848 LINK spdk_dd 00:02:18.848 CXX test/cpp_headers/keyring.o 00:02:18.848 CXX test/cpp_headers/keyring_module.o 00:02:18.848 CXX test/cpp_headers/likely.o 00:02:18.848 CXX test/cpp_headers/log.o 00:02:18.848 CXX test/cpp_headers/lvol.o 00:02:18.848 CXX test/cpp_headers/memory.o 00:02:18.848 CXX test/cpp_headers/mmio.o 00:02:18.848 CXX test/cpp_headers/nbd.o 00:02:18.848 CXX test/cpp_headers/net.o 00:02:18.848 CXX test/cpp_headers/notify.o 00:02:18.848 CXX test/cpp_headers/nvme.o 00:02:18.848 LINK pci_ut 00:02:18.848 CXX test/cpp_headers/nvme_intel.o 00:02:18.848 CXX test/cpp_headers/nvme_ocssd.o 00:02:18.848 LINK test_dma 00:02:18.848 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:18.848 CXX test/cpp_headers/nvme_spec.o 00:02:18.848 CXX test/cpp_headers/nvme_zns.o 00:02:18.848 CXX test/cpp_headers/nvmf_cmd.o 00:02:18.848 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:18.848 CXX test/cpp_headers/nvmf.o 00:02:18.848 CXX test/cpp_headers/nvmf_spec.o 00:02:19.120 CXX test/cpp_headers/nvmf_transport.o 00:02:19.120 CXX test/cpp_headers/opal.o 00:02:19.120 CXX test/cpp_headers/opal_spec.o 00:02:19.120 CC test/event/event_perf/event_perf.o 00:02:19.120 CC test/event/reactor/reactor.o 00:02:19.120 CXX test/cpp_headers/pci_ids.o 00:02:19.120 LINK nvme_fuzz 00:02:19.120 CXX test/cpp_headers/pipe.o 00:02:19.120 CC examples/idxd/perf/perf.o 00:02:19.120 CXX test/cpp_headers/queue.o 00:02:19.120 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.120 CXX test/cpp_headers/reduce.o 00:02:19.120 LINK spdk_nvme 00:02:19.120 LINK spdk_bdev 00:02:19.120 CC test/event/reactor_perf/reactor_perf.o 00:02:19.120 CC examples/sock/hello_world/hello_sock.o 
00:02:19.120 CC examples/thread/thread/thread_ex.o 00:02:19.120 CXX test/cpp_headers/rpc.o 00:02:19.382 CXX test/cpp_headers/scheduler.o 00:02:19.382 CC test/event/app_repeat/app_repeat.o 00:02:19.382 CXX test/cpp_headers/scsi.o 00:02:19.382 CXX test/cpp_headers/scsi_spec.o 00:02:19.382 CXX test/cpp_headers/sock.o 00:02:19.382 CXX test/cpp_headers/stdinc.o 00:02:19.382 CXX test/cpp_headers/string.o 00:02:19.382 CXX test/cpp_headers/thread.o 00:02:19.382 CXX test/cpp_headers/trace.o 00:02:19.382 CXX test/cpp_headers/trace_parser.o 00:02:19.382 CXX test/cpp_headers/tree.o 00:02:19.382 CC test/event/scheduler/scheduler.o 00:02:19.382 CC examples/vmd/led/led.o 00:02:19.382 CXX test/cpp_headers/ublk.o 00:02:19.382 CXX test/cpp_headers/util.o 00:02:19.382 CXX test/cpp_headers/uuid.o 00:02:19.382 CXX test/cpp_headers/version.o 00:02:19.382 CXX test/cpp_headers/vfio_user_pci.o 00:02:19.382 LINK reactor 00:02:19.382 CXX test/cpp_headers/vfio_user_spec.o 00:02:19.382 LINK spdk_nvme_perf 00:02:19.382 CXX test/cpp_headers/vhost.o 00:02:19.382 CXX test/cpp_headers/vmd.o 00:02:19.382 CXX test/cpp_headers/xor.o 00:02:19.382 CXX test/cpp_headers/zipf.o 00:02:19.382 CC app/vhost/vhost.o 00:02:19.382 LINK event_perf 00:02:19.382 LINK mem_callbacks 00:02:19.382 LINK lsvmd 00:02:19.643 LINK reactor_perf 00:02:19.643 LINK spdk_nvme_identify 00:02:19.643 LINK app_repeat 00:02:19.643 LINK vhost_fuzz 00:02:19.643 LINK spdk_top 00:02:19.643 LINK led 00:02:19.643 CC test/nvme/e2edp/nvme_dp.o 00:02:19.643 CC test/nvme/reset/reset.o 00:02:19.643 CC test/nvme/overhead/overhead.o 00:02:19.643 CC test/nvme/sgl/sgl.o 00:02:19.643 CC test/nvme/aer/aer.o 00:02:19.643 CC test/nvme/startup/startup.o 00:02:19.643 CC test/nvme/reserve/reserve.o 00:02:19.643 CC test/nvme/err_injection/err_injection.o 00:02:19.643 LINK hello_sock 00:02:19.643 CC test/blobfs/mkfs/mkfs.o 00:02:19.643 CC test/accel/dif/dif.o 00:02:19.904 CC test/nvme/simple_copy/simple_copy.o 00:02:19.904 LINK thread 00:02:19.904 CC test/nvme/connect_stress/connect_stress.o 00:02:19.904 CC test/lvol/esnap/esnap.o 00:02:19.904 CC test/nvme/boot_partition/boot_partition.o 00:02:19.904 LINK scheduler 00:02:19.904 LINK vhost 00:02:19.904 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.904 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.904 CC test/nvme/fdp/fdp.o 00:02:19.904 CC test/nvme/compliance/nvme_compliance.o 00:02:19.904 CC test/nvme/cuse/cuse.o 00:02:19.904 LINK idxd_perf 00:02:19.904 LINK reserve 00:02:19.904 LINK err_injection 00:02:19.904 LINK startup 00:02:20.163 LINK boot_partition 00:02:20.163 LINK mkfs 00:02:20.163 LINK sgl 00:02:20.163 LINK memory_ut 00:02:20.163 LINK fused_ordering 00:02:20.163 LINK overhead 00:02:20.163 LINK nvme_dp 00:02:20.163 LINK simple_copy 00:02:20.163 LINK connect_stress 00:02:20.163 LINK doorbell_aers 00:02:20.163 LINK reset 00:02:20.163 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:20.163 CC examples/nvme/abort/abort.o 00:02:20.163 LINK aer 00:02:20.163 CC examples/nvme/reconnect/reconnect.o 00:02:20.163 CC examples/nvme/hello_world/hello_world.o 00:02:20.163 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:20.163 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:20.163 CC examples/nvme/arbitration/arbitration.o 00:02:20.163 CC examples/nvme/hotplug/hotplug.o 00:02:20.163 LINK fdp 00:02:20.420 CC examples/accel/perf/accel_perf.o 00:02:20.420 LINK nvme_compliance 00:02:20.420 CC examples/blob/hello_world/hello_blob.o 00:02:20.420 LINK dif 00:02:20.420 CC examples/blob/cli/blobcli.o 00:02:20.420 LINK 
pmr_persistence
00:02:20.420 LINK cmb_copy
00:02:20.678 LINK hotplug
00:02:20.678 LINK hello_world
00:02:20.678 LINK hello_blob
00:02:20.678 LINK arbitration
00:02:20.678 LINK reconnect
00:02:20.678 LINK abort
00:02:20.678 LINK accel_perf
00:02:20.678 LINK nvme_manage
00:02:20.678 CC test/bdev/bdevio/bdevio.o
00:02:20.936 LINK blobcli
00:02:21.194 LINK iscsi_fuzz
00:02:21.194 CC examples/bdev/hello_world/hello_bdev.o
00:02:21.194 CC examples/bdev/bdevperf/bdevperf.o
00:02:21.194 LINK bdevio
00:02:21.451 LINK hello_bdev
00:02:21.451 LINK cuse
00:02:22.015 LINK bdevperf
00:02:22.272 CC examples/nvmf/nvmf/nvmf.o
00:02:22.530 LINK nvmf
00:02:25.063 LINK esnap
00:02:25.323
00:02:25.323 real 0m49.434s
00:02:25.323 user 10m9.354s
00:02:25.323 sys 2m30.372s
00:02:25.323 17:45:11 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:25.323 17:45:11 make -- common/autotest_common.sh@10 -- $ set +x
00:02:25.323 ************************************
00:02:25.323 END TEST make
00:02:25.323 ************************************
00:02:25.323 17:45:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:25.323 17:45:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:25.323 17:45:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:25.323 17:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:25.323 17:45:11 -- pm/common@44 -- $ pid=2572188
00:02:25.323 17:45:11 -- pm/common@50 -- $ kill -TERM 2572188
00:02:25.323 17:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:25.323 17:45:11 -- pm/common@44 -- $ pid=2572189
00:02:25.323 17:45:11 -- pm/common@50 -- $ kill -TERM 2572189
00:02:25.323 17:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:25.323 17:45:11 -- pm/common@44 -- $ pid=2572192
00:02:25.323 17:45:11 -- pm/common@50 -- $ kill -TERM 2572192
00:02:25.323 17:45:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:25.323 17:45:11 -- pm/common@44 -- $ pid=2572220
00:02:25.323 17:45:11 -- pm/common@50 -- $ sudo -E kill -TERM 2572220
00:02:25.323 17:45:11 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:25.323 17:45:11 -- nvmf/common.sh@7 -- # uname -s
00:02:25.323 17:45:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:25.323 17:45:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:25.323 17:45:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:25.323 17:45:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:25.323 17:45:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:25.323 17:45:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:25.323 17:45:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:25.323 17:45:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:25.323 17:45:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:25.323 17:45:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:25.323 17:45:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:02:25.323 17:45:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:02:25.323 17:45:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:25.323 17:45:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:25.323 17:45:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:25.323 17:45:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:25.323 17:45:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:25.323 17:45:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:25.323 17:45:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:25.323 17:45:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:25.323 17:45:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:25.323 17:45:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:25.323 17:45:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:25.323 17:45:11 -- paths/export.sh@5 -- # export PATH
00:02:25.323 17:45:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:25.323 17:45:11 -- nvmf/common.sh@47 -- # : 0
00:02:25.323 17:45:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:25.323 17:45:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:25.323 17:45:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:25.323 17:45:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:25.323 17:45:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:25.323 17:45:11 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:25.323 17:45:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:25.323 17:45:11 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:25.323 17:45:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:25.323 17:45:11 -- spdk/autotest.sh@32 -- # uname -s
00:02:25.323 17:45:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:25.323 17:45:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:25.323 17:45:11 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:25.323 17:45:11 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:25.323 17:45:11 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:25.323 17:45:11 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:25.323 17:45:11 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:25.323 17:45:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:25.323 17:45:11 -- spdk/autotest.sh@48 -- # udevadm_pid=2628305
00:02:25.323 17:45:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:25.323 17:45:11 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:25.323 17:45:11 -- pm/common@17 -- # local monitor
00:02:25.323 17:45:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@21 -- # date +%s
00:02:25.323 17:45:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.323 17:45:11 -- pm/common@21 -- # date +%s
00:02:25.323 17:45:11 -- pm/common@25 -- # sleep 1
00:02:25.323 17:45:11 -- pm/common@21 -- # date +%s
00:02:25.323 17:45:11 -- pm/common@21 -- # date +%s
00:02:25.323 17:45:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721835911
00:02:25.324 17:45:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721835911
00:02:25.324 17:45:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721835911
00:02:25.324 17:45:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721835911
00:02:25.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721835911_collect-vmstat.pm.log
00:02:25.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721835911_collect-cpu-load.pm.log
00:02:25.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721835911_collect-cpu-temp.pm.log
00:02:25.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721835911_collect-bmc-pm.bmc.pm.log
00:02:26.700 17:45:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:26.700 17:45:12 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:26.700 17:45:12 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:26.700 17:45:12 -- common/autotest_common.sh@10 -- # set +x
00:02:26.700 17:45:12 -- spdk/autotest.sh@59 -- # create_test_list
00:02:26.700 17:45:12 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:26.700 17:45:12 -- common/autotest_common.sh@10 -- # set +x
00:02:26.700 17:45:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:26.700 17:45:12 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:26.700 17:45:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:26.700 17:45:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:26.700 17:45:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:26.700 17:45:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:26.700 17:45:12 -- common/autotest_common.sh@1453 -- # uname
00:02:26.700 17:45:12 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']'
00:02:26.700 17:45:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:26.700 17:45:12 -- common/autotest_common.sh@1473 -- # uname
00:02:26.700 17:45:12 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]]
00:02:26.700 17:45:12 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:26.700 17:45:12 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:26.700 17:45:12 -- spdk/autotest.sh@72 -- # hash lcov
00:02:26.700 17:45:12 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:26.700 17:45:12 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:26.700 --rc lcov_branch_coverage=1
00:02:26.700 --rc lcov_function_coverage=1
00:02:26.700 --rc genhtml_branch_coverage=1
00:02:26.700 --rc genhtml_function_coverage=1
00:02:26.700 --rc genhtml_legend=1
00:02:26.700 --rc geninfo_all_blocks=1
00:02:26.700 '
00:02:26.700 17:45:12 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:26.700 --rc lcov_branch_coverage=1
00:02:26.700 --rc lcov_function_coverage=1
00:02:26.700 --rc genhtml_branch_coverage=1
00:02:26.700 --rc genhtml_function_coverage=1
00:02:26.700 --rc genhtml_legend=1
00:02:26.700 --rc geninfo_all_blocks=1
00:02:26.700 '
00:02:26.700 17:45:12 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:26.700 --rc lcov_branch_coverage=1
00:02:26.700 --rc lcov_function_coverage=1
00:02:26.700 --rc genhtml_branch_coverage=1
00:02:26.700 --rc genhtml_function_coverage=1
00:02:26.700 --rc genhtml_legend=1
00:02:26.700 --rc geninfo_all_blocks=1
00:02:26.700 --no-external'
00:02:26.700 17:45:12 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:26.700 --rc lcov_branch_coverage=1
00:02:26.700 --rc lcov_function_coverage=1
00:02:26.700 --rc genhtml_branch_coverage=1
00:02:26.700 --rc genhtml_function_coverage=1
00:02:26.700 --rc genhtml_legend=1
00:02:26.700 --rc geninfo_all_blocks=1
00:02:26.700 --no-external'
00:02:26.700 17:45:12 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:26.700 lcov: LCOV version 1.14
00:02:26.700 17:45:12 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:41.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:41.637 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:56.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:56.520 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
[the same "no functions found" / "GCOV did not produce any data" warning pair repeats for every remaining /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/*.gcno file, assert through zipf]
00:02:59.807 17:45:45 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:59.807 17:45:45 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:59.807 17:45:45 -- common/autotest_common.sh@10 -- # set +x
00:02:59.808 17:45:45 -- spdk/autotest.sh@91 -- # rm -f
00:02:59.808 17:45:45 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:00.742 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:00.742 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:00.742 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:03:00.742 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:03:00.742 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:03:00.742 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:03:00.742 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:03:00.742 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:03:00.742 0000:0b:00.0 (8086 0a54): Already using the nvme driver
00:03:00.742 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:03:00.742 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:03:00.742 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:03:00.742 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:03:00.742 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:03:00.742 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:03:00.742 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:03:00.742 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:03:01.001 17:45:47 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:01.001 17:45:47 -- common/autotest_common.sh@1667 -- # zoned_devs=()
00:03:01.001 17:45:47 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs
00:03:01.001 17:45:47 -- common/autotest_common.sh@1668 -- # local nvme bdf
00:03:01.001 17:45:47 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme*
00:03:01.001 17:45:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:01.001 17:45:47 -- common/autotest_common.sh@1660 -- # local device=nvme0n1
00:03:01.001 17:45:47 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:01.001 17:45:47 -- common/autotest_common.sh@1663 -- # [[ none != none ]]
00:03:01.001 17:45:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:01.001 17:45:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:01.001 17:45:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:01.001 17:45:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:01.001 17:45:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:01.001 17:45:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:01.001 No valid GPT data, bailing
00:03:01.001 17:45:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:01.001 17:45:47 -- scripts/common.sh@391 -- # pt=
00:03:01.001 17:45:47 -- scripts/common.sh@392 -- # return 1
00:03:01.001 17:45:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:01.001 1+0 records in
00:03:01.001 1+0 records out
00:03:01.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00158193 s, 663 MB/s
00:03:01.001 17:45:47 -- spdk/autotest.sh@118 -- # sync
00:03:01.001 17:45:47 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:01.001 17:45:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:01.001 17:45:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:02.905 17:45:49 -- spdk/autotest.sh@124 -- # uname -s
00:03:02.905 17:45:49 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:02.905 17:45:49 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:02.905 17:45:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:02.905 17:45:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:02.905 17:45:49 -- common/autotest_common.sh@10 -- # set +x
00:03:02.905 ************************************
00:03:02.905 START TEST setup.sh
00:03:02.905 ************************************
00:03:02.905 17:45:49 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:02.905 * Looking for test storage...
00:03:02.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:02.905 17:45:49 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:02.905 17:45:49 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:02.905 17:45:49 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:02.905 17:45:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:02.905 17:45:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:02.905 17:45:49 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:03.164 ************************************
00:03:03.164 START TEST acl
00:03:03.164 ************************************
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:03.164 * Looking for test storage...
00:03:03.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:03.164 17:45:49 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=()
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme*
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:03.164 17:45:49 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]]
00:03:03.164 17:45:49 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:03.164 17:45:49 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:03.164 17:45:49 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:03.164 17:45:49 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:03.164 17:45:49 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:03.164 17:45:49 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:03.164 17:45:49 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:04.540 17:45:50 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:04.540 17:45:50 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:04.540 17:45:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:04.540 17:45:50 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:04.540 17:45:50 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:04.540 17:45:50 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:05.922 Hugepages
00:03:05.922 node hugesize free / total
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:05.922
00:03:05.922 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the same setup/acl.sh@19/@20 match-and-continue trace repeats for 0000:00:04.1 through 0000:00:04.7]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the same setup/acl.sh@19/@20 match-and-continue trace repeats for 0000:80:04.0 through 0000:80:04.7]
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:05.922 17:45:51 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:05.922 17:45:51 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:05.922 17:45:51 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:05.922 17:45:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:05.922 ************************************
00:03:05.922 START TEST denied
00:03:05.922 ************************************
00:03:05.922 17:45:51 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:03:05.922 17:45:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0'
00:03:05.922 17:45:51 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:05.922 17:45:51 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0'
00:03:05.922 17:45:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.922 17:45:51 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:07.298 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]]
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:07.298 17:45:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:09.827
00:03:09.827 real 0m3.641s
00:03:09.827 user 0m1.065s
00:03:09.827 sys 0m1.670s
00:03:09.827 17:45:55 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:09.827 17:45:55 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:09.827 ************************************
00:03:09.827 END TEST denied
00:03:09.827 ************************************
00:03:09.827 17:45:55 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:09.827 17:45:55 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:09.827 17:45:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:09.827 17:45:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:09.827 ************************************
00:03:09.827 START TEST allowed
00:03:09.827 ************************************
00:03:09.827 17:45:55 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:03:09.827 17:45:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0
00:03:09.827 17:45:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:09.827 17:45:55 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*'
00:03:09.827 17:45:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.827 17:45:55 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:11.732 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:03:11.732 17:45:57 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:11.732 17:45:57 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:11.732 17:45:57 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:11.732 17:45:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:11.732 17:45:57 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:13.638
00:03:13.638 real 0m3.780s
00:03:13.638 user 0m0.986s
00:03:13.638 sys 0m1.694s
00:03:13.638 17:45:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:13.638 17:45:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:13.638 ************************************
00:03:13.638 END TEST allowed
00:03:13.638 ************************************
00:03:13.638
00:03:13.638 real 0m10.281s
00:03:13.638 user 0m3.141s
00:03:13.638 sys 0m5.203s
00:03:13.638 17:45:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:13.638 17:45:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:13.638 ************************************
00:03:13.638 END TEST acl
00:03:13.638 ************************************
00:03:13.638 17:45:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:13.638 17:45:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:13.638 17:45:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.638 17:45:59 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:13.638 ************************************
00:03:13.638 START TEST hugepages
00:03:13.638 ************************************
00:03:13.638 17:45:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:13.638 * Looking for test storage...
00:03:13.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39068716 kB' 'MemAvailable: 42987088 kB' 'Buffers: 2704 kB' 'Cached: 14605968 kB' 'SwapCached: 0 kB' 'Active: 11452868 kB' 'Inactive: 3693412 kB' 'Active(anon): 11013096 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541296 kB' 'Mapped: 180268 kB' 'Shmem: 10475488 kB' 'KReclaimable: 429624 kB' 'Slab: 818856 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 389232 kB' 'KernelStack: 12688 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 12158708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197036 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:13.638 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[the same setup/common.sh@31/@32 read-compare-continue trace repeats for each subsequent /proc/meminfo field while get_meminfo scans for Hugepagesize]
00:03:13.639 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.639 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.639 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.639 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.639 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.640 17:45:59 
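The trace above is setup/common.sh's get_meminfo helper resolving Hugepagesize: it snapshots the meminfo file into an array, strips any "Node N" prefixes, then walks the entries with IFS=': ' until the requested key matches and echoes its value. A minimal sketch of that logic, reconstructed from the xtrace alone (the function body and the per-node file path are assumptions, not the verbatim SPDK source):

shopt -s extglob                       # the +([0-9]) pattern below needs extglob
get_meminfo() {                        # hypothetical reconstruction from the trace
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A node argument switches to that node's own meminfo, as the
    # [[ -e /sys/devices/system/node/node$node/meminfo ]] probe suggests.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes from per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip keys until the requested one
        echo "$val"                        # value only, e.g. 2048 for Hugepagesize
        return 0
    done
    return 1
}

Used as default_hugepages=$(get_meminfo Hugepagesize), this yields 2048 on this runner, matching the echo 2048 above.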
00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:13.640 17:45:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:13.640 17:45:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:13.640 17:45:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.640 17:45:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:13.640 ************************************
00:03:13.640 START TEST default_setup
00:03:13.640 ************************************
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0'); local node_ids
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0'); local user_nodes
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=(); local -g nodes_test
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.640 17:45:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:14.618 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:14.618 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:14.618 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:14.618 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:14.618 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:14.618 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:14.618 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
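The sizing arithmetic traced in get_test_nr_hugepages above is simple: the 2097152 kB (2 GiB) request divided by the 2048 kB default page size gives nr_hugepages=1024, and because a single node id ('0') was passed, the whole pool is assigned to node 0. A hypothetical condensation of that step (variable names follow the trace; the exact function body is an assumption):

# Assumes default_hugepages (2048) and the nodes_test array from the caller.
get_test_nr_hugepages() {
    local size=$1; shift                  # requested pool size in kB (2097152 here)
    local node_ids=("$@")                 # optional NUMA node ids ('0' here)
    local nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages    # pin the pool: nodes_test[0]=1024
    done
}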
00:03:14.618 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:14.618 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:14.618 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:14.618 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:14.618 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:14.876 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:14.876 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:14.876 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:14.876 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:15.818 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@19-22 -- # local var val mem_f mem; mem_f=/proc/meminfo
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41165224 kB' 'MemAvailable: 45083596 kB' 'Buffers: 2704 kB' 'Cached: 14606068 kB' 'SwapCached: 0 kB' 'Active: 11466696 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026924 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554672 kB' 'Mapped: 179980 kB' 'Shmem: 10475588 kB' 'KReclaimable: 429624 kB' 'Slab: 818536 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388912 kB' 'KernelStack: 12768 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:15.818 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue  (scan skipped MemTotal through HardwareCorrupted, none matching AnonHugePages)
00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41166272 kB' 'MemAvailable: 45084644 kB' 'Buffers: 2704 kB' 'Cached: 14606068 kB' 'SwapCached: 0 kB' 'Active: 11466568 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026796 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554508 kB' 'Mapped: 179948 kB' 'Shmem: 10475588 kB' 'KReclaimable: 429624 kB' 'Slab: 818528 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388904 kB' 'KernelStack: 12704 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.820 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.821 17:46:02 setup.sh.hugepages.default_setup -- 
[xtrace condensed: SecPageTables through HugePages_Rsvd — every remaining /proc/meminfo field — is compared against HugePages_Surp and skipped via continue until the key matches]
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
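Note on the trace above: get_meminfo (setup/common.sh@17-33) snapshots a meminfo file with mapfile, strips any "Node N " prefix, then walks it as "key: value" pairs with IFS=': ' read until the requested key matches; the long continue runs are every non-matching key. A minimal bash sketch of that pattern, reconstructed from the xtrace alone (the structure and names are inferred from the trace, not SPDK's verbatim source):

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    # Sketch reconstructed from the xtrace; not SPDK's verbatim source.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the long continue runs in the trace
            echo "$val"  # kB for sizes, a bare count for HugePages_* fields
            return 0
        done
        return 1
    }

Called as "get_meminfo HugePages_Surp" it prints 0 on this host; "get_meminfo HugePages_Surp 0" reads node0's file instead, as happens further down in this log.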
00:03:15.822 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41166808 kB' 'MemAvailable: 45085180 kB' 'Buffers: 2704 kB' 'Cached: 14606076 kB' 'SwapCached: 0 kB' 'Active: 11466344 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026572 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554312 kB' 'Mapped: 179948 kB' 'Shmem: 10475596 kB' 'KReclaimable: 429624 kB' 'Slab: 818528 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388904 kB' 'KernelStack: 12704 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
[xtrace condensed: every field of the snapshot above is compared against HugePages_Rsvd in turn; MemTotal through HugePages_Free all fail the match and hit continue]
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.824 nr_hugepages=1024
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.824 resv_hugepages=0
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.824 surplus_hugepages=0
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.824 anon_hugepages=0
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.824 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41166808 kB' 'MemAvailable: 45085180 kB' 'Buffers: 2704 kB' 'Cached: 14606112 kB' 'SwapCached: 0 kB' 'Active: 11466272 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026500 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554220 kB' 'Mapped: 179800 kB' 'Shmem: 10475632 kB' 'KReclaimable: 429624 kB' 'Slab: 818564 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388940 kB' 'KernelStack: 12752 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
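For reference, a single field can be pulled from a snapshot like the one above with a one-liner; this is a hypothetical equivalent of the traced read loop, shown for comparison only, not what setup/common.sh actually runs:

    # Hypothetical equivalent, not the script's own code path:
    awk -F': +' '$1 == "HugePages_Total" { print $2 + 0 }' /proc/meminfo   # prints 1024 on this host

The $2 + 0 coercion drops any trailing "kB" unit, so the same line works for both sized fields and bare HugePages_* counts.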
[xtrace condensed: the fields above are compared against HugePages_Total one by one; MemTotal through Unaccepted all fail the match and hit continue]
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
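With the probe above returning 1024, and surp=0 and resv=0 from the earlier calls, the checks traced at hugepages.sh@107/@109/@110 reduce to 1024 == 1024 + 0 + 0. A sketch of that accounting check (variable names follow the trace; get_meminfo as sketched earlier):

    nr_hugepages=1024                       # the count the test requested
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    # The pool is consistent when the kernel's total covers the request
    # plus any surplus and reserved pages: 1024 == 1024 + 0 + 0
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2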
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.826 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18514484 kB' 'MemUsed: 14315400 kB' 'SwapCached: 0 kB' 'Active: 7756092 kB' 'Inactive: 3338808 kB' 'Active(anon): 7400336 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3338808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10820716 kB' 'Mapped: 126040 kB' 'AnonPages: 277452 kB' 'Shmem: 7126152 kB' 'KernelStack: 7976 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149392 kB' 'Slab: 331796 kB' 'SReclaimable: 149392 kB' 'SUnreclaim: 182404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
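The get_nodes pass above found two NUMA nodes (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2), and the per-node probe then points get_meminfo at /sys/devices/system/node/node0/meminfo. A sketch of that enumeration pattern (the right-hand side of the nodes_sys assignment is expanded away in the xtrace, so get_meminfo is substituted here as an assumption):

    shopt -s extglob nullglob   # extglob for +([0-9]); nullglob if no nodes exist
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # key by node number: .../node0 -> 0, .../node1 -> 1
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    echo "no_nodes=${#nodes_sys[@]} counts=${nodes_sys[*]}"   # e.g. no_nodes=2 counts=1024 0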
[xtrace condensed: each field of the node0 snapshot above is compared against HugePages_Surp and skipped via continue; the captured log cuts off mid-scan at the HugePages_Total comparison]
00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.087 17:46:02
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.087 node0=1024 expecting 1024 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.087 00:03:16.087 real 0m2.461s 00:03:16.087 user 0m0.600s 00:03:16.087 sys 0m0.938s 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.087 17:46:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:16.087 ************************************ 00:03:16.087 END TEST default_setup 00:03:16.087 ************************************ 00:03:16.087 17:46:02 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:16.087 17:46:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.087 17:46:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.087 17:46:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.087 ************************************ 00:03:16.087 START TEST per_node_1G_alloc 00:03:16.087 ************************************ 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
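The trace above is setup/common.sh's get_meminfo helper scanning a meminfo listing one "key: value" pair at a time until the requested field (HugePages_Surp here) matches, then echoing its value. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the trace: snapshot the
    # meminfo file, then scan key/value pairs with IFS=': ' until the
    # requested key matches, echoing its numeric value (unit tokens land in _).
    get_meminfo() {
        local get=$1 var val _ mem
        mapfile -t mem < /proc/meminfo          # common.sh@28 in the trace
        while IFS=': ' read -r var val _; do    # common.sh@31
            [[ $var == "$get" ]] || continue    # common.sh@32
            echo "$val"                         # common.sh@33
            return 0
        done < <(printf '%s\n' "${mem[@]}")     # common.sh@16
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this host, per the snapshots below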
00:03:16.087 17:46:02 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:16.087 17:46:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:16.087 17:46:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:16.087 17:46:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:16.087 ************************************
00:03:16.087 START TEST per_node_1G_alloc
00:03:16.087 ************************************
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.087 17:46:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:17.022 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.022 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.022 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.022 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.022 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.022 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.022 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.022 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:17.022 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.022 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.022 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.022 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.022 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.022 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.022 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.022 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.022 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
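get_test_nr_hugepages asked for 1048576 kB (1 GiB) spread over node_ids 0 and 1, and the trace lands on nr_hugepages=512 with 512 pages recorded per node. That is consistent with dividing the requested size by the 2048 kB Hugepagesize reported in the meminfo snapshots; a sketch of that arithmetic (the division is an inference from the traced values, not a quote of the script):

    # Sizing sketch: convert a kB request into default-size huge pages and
    # record that count for every requested NUMA node.
    size=1048576                                   # requested kB (1 GiB)
    default_hugepages=2048                         # Hugepagesize in kB
    nr_hugepages=$((size / default_hugepages))     # 512 pages
    user_nodes=(0 1)
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages             # 512 pages per node
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"       # matches the traced env vars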
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.285 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41192596 kB' 'MemAvailable: 45110968 kB' 'Buffers: 2704 kB' 'Cached: 14606176 kB' 'SwapCached: 0 kB' 'Active: 11466688 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026916 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554496 kB' 'Mapped: 179952 kB' 'Shmem: 10475696 kB' 'KReclaimable: 429624 kB' 'Slab: 818408 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388784 kB' 'KernelStack: 12704 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
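Inside get_meminfo the trace picks a data source before parsing: with an empty node argument the /sys/devices/system/node/node/meminfo existence test fails, so the system-wide /proc/meminfo is read. Per-node files prefix each line with "Node <n> ", which the extglob expansion at common.sh@29 strips so both formats parse identically. A sketch of that selection, assuming the standard sysfs layout:

    # Source-selection sketch: prefer the per-node meminfo when a node id is
    # given and the sysfs file exists, then strip the "Node <n> " prefix so
    # the parser sees plain "key: value" lines either way.
    shopt -s extglob                 # needed for the +([0-9]) pattern below
    node=${1-}                       # empty in the traced call
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # no-op for /proc/meminfo lines
    printf '%s\n' "${mem[@]:0:3}"    # first three parsed lines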
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical @31 read / @32 compare-and-continue trace elided for the remaining /proc/meminfo keys (MemFree through HardwareCorrupted) ...]
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
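With transparent hugepages in madvise mode (the @96 test against *[never]* did not match), AnonHugePages is read and comes back 0; the script then gathers the surplus and reserved counters the same way. The sequence, reusing the get_meminfo sketch from above (line references are to the traced setup/hugepages.sh):

    # Counter-gathering sketch: three meminfo reads feed the final check.
    anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97 -> anon=0
    surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99, traced next
    resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100, traced after that
    echo "anon=$anon surp=$surp resv=$resv"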
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... common.sh@18-@31 get_meminfo prologue identical to the AnonHugePages call above ...]
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41193020 kB' 'MemAvailable: 45111392 kB' 'Buffers: 2704 kB' 'Cached: 14606180 kB' 'SwapCached: 0 kB' 'Active: 11466612 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026840 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554472 kB' 'Mapped: 179888 kB' 'Shmem: 10475700 kB' 'KReclaimable: 429624 kB' 'Slab: 818400 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388776 kB' 'KernelStack: 12752 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.286 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical @31 read / @32 compare-and-continue trace elided for the remaining keys (MemFree through HugePages_Rsvd) ...]
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
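With surp=0 banked, the reserved-page count is queried next, after which verify_nr_hugepages compares the per-node tallies against the expectation, the same @126-@130 pattern that closed default_setup with "node0=1024 expecting 1024". A simplified sketch of that closing comparison (the real function also folds each node's surplus into nodes_test via the @117 increment, which this omits):

    # Expectation-check sketch: every node's tally must equal the expected
    # per-node page count or the test fails.
    nodes_test=([0]=512 [1]=512)         # per_node_1G_alloc's request
    expecting=512
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[$node]} expecting $expecting"
        [[ ${nodes_test[$node]} == "$expecting" ]] || exit 1
    done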
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.287 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.288 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41192768 kB' 'MemAvailable: 45111140 kB' 'Buffers: 2704 kB' 'Cached: 14606196 kB' 'SwapCached: 0 kB' 'Active: 11466512 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026740 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554312 kB' 'Mapped: 179812 kB' 'Shmem: 10475716 kB' 'KReclaimable: 429624 kB' 'Slab: 818392 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388768 kB' 'KernelStack: 12768 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:17.288 [identical read/compare/continue iterations over every /proc/meminfo key, MemTotal through HugePages_Free, elided]
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:17.289 nr_hugepages=1024
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.289 resv_hugepages=0
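A side note on the odd-looking \H\u\g\e\P\a\g\e\s\_\R\s\v\d in the compare lines: inside bash's [[ ]], an unquoted right-hand side of == is a glob pattern, so the script quotes or escapes the operand to force a literal comparison, and xtrace renders that escaping character by character. A two-line illustration of the difference:

    key=HugePages_Rsvd
    [[ $key == "HugePages_Rsvd" ]] && echo literal    # exact-string match only
    [[ $key == HugePages_* ]] && echo pattern         # glob match, would also hit HugePages_Surp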
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.289 surplus_hugepages=0
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.289 anon_hugepages=0
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.289 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41193128 kB' 'MemAvailable: 45111500 kB' 'Buffers: 2704 kB' 'Cached: 14606220 kB' 'SwapCached: 0 kB' 'Active: 11466536 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026764 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554316 kB' 'Mapped: 179812 kB' 'Shmem: 10475740 kB' 'KReclaimable: 429624 kB' 'Slab: 818392 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388768 kB' 'KernelStack: 12768 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:17.289 [identical read/compare/continue iterations over every /proc/meminfo key, MemTotal through Unaccepted, elided]
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
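What hugepages.sh verified at @107 and @110 above: the HugePages_Total read back from /proc/meminfo must equal the requested nr_hugepages plus any surplus and reserved pages, which in this run is 1024 == 1024 + 0 + 0. A sketch of that consistency check, reusing the get_meminfo reconstruction from earlier (variable names follow the trace; the error message is illustrative, not the script's):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # Fail loudly if the kernel's accounting disagrees with the request.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2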
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.290 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19566300 kB' 'MemUsed: 13263584 kB' 'SwapCached: 0 kB' 'Active: 7756024 kB' 'Inactive: 3338808 kB' 'Active(anon): 7400268 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3338808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10820716 kB' 'Mapped: 126052 kB' 'AnonPages: 277324 kB' 'Shmem: 7126152 kB' 'KernelStack: 7944 kB' 'PageTables: 5156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149392 kB' 'Slab: 331592 kB' 'SReclaimable: 149392 kB' 'SUnreclaim: 182200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:17.290 [identical read/compare/continue iterations over the node0 meminfo keys, MemTotal through Unaccepted, elided]
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21626828 kB' 'MemUsed: 6084996 kB' 'SwapCached: 0 kB' 'Active: 3710324 kB' 'Inactive: 354604 kB' 'Active(anon): 3626308 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 354604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3788252 kB' 'Mapped: 53760 kB' 'AnonPages: 276736 kB' 'Shmem: 3349632 kB' 'KernelStack: 4808 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280232 kB' 'Slab: 486800 kB' 'SReclaimable: 280232 kB' 'SUnreclaim: 206568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
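The wall of IFS=': ' / read / continue lines in this trace is setup/common.sh's get_meminfo walking one meminfo file a field at a time. Below is a minimal bash sketch of that scan, reconstructed from the xtrace; the function wrapper, the for-loop form, and the not-found default are assumptions (the trace itself feeds the snapshot through printf into the same read):

# Sketch of the meminfo scan driven by setup/common.sh above; not the verbatim
# SPDK source. extglob enables the +([0-9]) prefix pattern used at common.sh@29.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo file (cf. common.sh@23-24).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 1 " line prefix (common.sh@29)
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp: 0" -> var, val
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0   # field absent: defaulting to 0 is this sketch's assumption
}

# Example matching the trace: get_meminfo HugePages_Surp 1 prints 0 on this box.

The per-node files under /sys/devices/system/node prefix every line with "Node N ", which is why the trace strips that prefix before scanning; /proc/meminfo needs no such treatment.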
[xtrace condensed: the same per-field read/continue cycle repeats over the node1 snapshot above until HugePages_Surp is reached]
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:17.291 real 0m1.394s
00:03:17.291 user 0m0.594s
00:03:17.291 sys 0m0.759s
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:17.291 17:46:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:17.291 ************************************
00:03:17.291 END TEST per_node_1G_alloc
00:03:17.292 ************************************
00:03:17.550 17:46:03 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:17.550 17:46:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
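Before following the even_2G_alloc trace: the two "expecting 512" lines just recorded are the heart of the per_node_1G_alloc pass. hugepages.sh seeds nodes_test with the even split it requested, folds each node's reserved and surplus pages back in, and compares. A hedged reconstruction of that ledger (variable names mirror the trace at hugepages.sh@115-@130; the resv handling and surrounding code are assumptions, and get_meminfo is the sketch shown earlier):

nodes_test=(512 512)   # expected 2 MB pages per node after the even split
resv=0                 # reserved pages; the real script derives this earlier (cf. @116)
for node in "${!nodes_test[@]}"; do
    surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes in this run
    (( nodes_test[node] += resv + surp ))        # cf. @116-@117 in the trace
done
for node in "${!nodes_test[@]}"; do              # report, as at @126-@128
    echo "node$node=${nodes_test[node]} expecting 512"
done

With zero surplus and zero reserved pages on both nodes, the totals stay at 512 each and the @130 comparison passes.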
00:03:17.550 17:46:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:17.550 17:46:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:17.550 ************************************
00:03:17.550 START TEST even_2G_alloc
00:03:17.550 ************************************
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.550 17:46:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:18.485 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:18.485 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:18.485 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:18.485 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:18.485 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:18.485 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:18.485 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:18.485 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:18.485 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:18.485 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.485 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:18.485 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:18.485 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:18.485 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:18.485 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:18.485 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:18.485 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:18.748 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:18.748 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:18.748 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.748 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.749 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41197316 kB' 'MemAvailable: 45115688 kB' 'Buffers: 2704 kB' 'Cached: 14606316 kB' 'SwapCached: 0 kB' 'Active: 11466752 kB' 'Inactive: 3693412 kB' 'Active(anon): 11026980 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554268 kB' 'Mapped: 179840 kB' 'Shmem: 10475836 kB' 'KReclaimable: 429624 kB' 'Slab: 818092 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388468 kB' 'KernelStack: 12768 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
[xtrace condensed: the read/continue cycle repeats over every /proc/meminfo field above until AnonHugePages is reached]
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
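The anon=0 just recorded comes out of the transparent-hugepage guard visible earlier in the trace (hugepages.sh@96): AnonHugePages only counts toward the tally when THP is not pinned to [never]. A sketch of that guard under the standard kernel sysfs layout (the surrounding function is an assumption; get_meminfo is the sketch from earlier):

# Only fold anonymous hugepages into the tally when THP can actually produce them.
thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp_state != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # kB; 0 in this run's snapshot
fi

On this box the setting is "always [madvise] never", so the lookup runs and returns the 0 kB seen in the snapshot.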
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.750 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41197724 kB' 'MemAvailable: 45116096 kB' 'Buffers: 2704 kB' 'Cached: 14606320 kB' 'SwapCached: 0 kB' 'Active: 11466896 kB' 'Inactive: 3693412 kB' 'Active(anon): 11027124 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554396 kB' 'Mapped: 179824 kB' 'Shmem: 10475840 kB' 'KReclaimable: 429624 kB' 'Slab: 818092 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388468 kB' 'KernelStack: 12784 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
[xtrace condensed: the read/continue cycle repeats for the snapshot fields above through SUnreclaim; the trace resumes mid-loop below]
00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.751 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 
17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 
17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- 
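The trace above is setup/common.sh's get_meminfo() walking every /proc/meminfo key until the requested one (here HugePages_Surp) matches, then echoing its value. A minimal standalone sketch of the same scan, paraphrased from the trace rather than copied from the SPDK source; the sed prefix-strip below stands in for the mem-array manipulation the trace shows:

#!/usr/bin/env bash
# Sketch of the traced key scan: walk meminfo line by line, split each
# line on ': ' into key/value, and print the value of the first match.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # With a node argument, prefer that node's own meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines carry a "Node N " prefix; strip it so the
    # comparison sees plain keys like HugePages_Surp.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

surp=$(get_meminfo HugePages_Surp)   # 0 in this run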
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.752 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41197788 kB' 'MemAvailable: 45116160 kB' 'Buffers: 2704 kB' 'Cached: 14606336 kB' 'SwapCached: 0 kB' 'Active: 11466868 kB' 'Inactive: 3693412 kB' 'Active(anon): 11027096 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554332 kB' 'Mapped: 179824 kB' 'Shmem: 10475856 kB' 'KReclaimable: 429624 kB' 'Slab: 818160 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388536 kB' 'KernelStack: 12768 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
[... the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle repeats for each key from MemTotal through HugePages_Free; no key matches until HugePages_Rsvd ...]
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:18.754 nr_hugepages=1024
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:18.754 resv_hugepages=0
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:18.754 surplus_hugepages=0
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:18.754 anon_hugepages=0
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.754 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
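The hugepages.sh lines above reduce to a simple accounting check: the requested nr_hugepages must equal the kernel's hugepage total, with no surplus or reserved pages, before the test proceeds. A hedged sketch of that check (covering both the @107/@109 tests here and the @110 re-check after the HugePages_Total read that follows), reusing the get_meminfo() sketch shown earlier; the literal values in comments are the ones this run reports:

# Consistency check: the kernel's total must equal the requested page
# count plus any surplus and reserved pages.
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) ||
    { echo 'hugepage accounting mismatch' >&2; exit 1; }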
17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41198692 kB' 'MemAvailable: 45117064 kB' 'Buffers: 2704 kB' 'Cached: 14606360 kB' 'SwapCached: 0 kB' 'Active: 11466848 kB' 'Inactive: 3693412 kB' 'Active(anon): 11027076 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554296 kB' 'Mapped: 179824 kB' 'Shmem: 10475880 kB' 'KReclaimable: 429624 kB' 'Slab: 818160 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388536 kB' 'KernelStack: 12752 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12170856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.755 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.755-00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted each tested against HugePages_Total -- no match, continue]
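The run of compare-and-continue entries above is one full pass of the harness's get_meminfo helper over /proc/meminfo: it reads "key: value unit" triples and skips every key until the one it was asked for matches. A minimal sketch of the same pattern, reconstructed from the xtrace rather than copied from setup/common.sh (the real helper snapshots the file with mapfile first):

    #!/usr/bin/env bash
    # Print the value for one /proc/meminfo key, the way the traced
    # helper does it: split each line on ':' and whitespace, skip
    # (continue) every non-matching key, echo the value on a hit.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # Shmem, Slab, ... no match
            echo "$val"                        # kB for sizes, a bare count for HugePages_*
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on this box, per the trace below

The backslash-heavy \H\u\g\e\P\a\g\e\s strings in the log are simply how bash xtrace renders the quoted right-hand side of [[ == ]] so it stays a literal match instead of a glob pattern.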
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.756 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.757 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19550932 kB' 'MemUsed: 13278952 kB' 'SwapCached: 0 kB' 'Active: 7756132 kB' 'Inactive: 3338808 kB' 'Active(anon): 7400376 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3338808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10820788 kB' 'Mapped: 126064 kB' 'AnonPages: 277252 kB' 'Shmem: 7126224 kB' 'KernelStack: 7928 kB' 'PageTables: 4988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149392 kB' 'Slab: 331544 kB' 'SReclaimable: 149392 kB' 'SUnreclaim: 182152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:18.757-00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted each tested against HugePages_Surp -- no match, continue]
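For the per-node pass the same scan runs against /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0" prefix that plain /proc/meminfo lines do not have; the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips that prefix with an extglob pattern so the generic key scan can proceed unchanged. The strip in isolation (a sketch; assumes a NUMA machine with a node0, like the box in this log):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    # Lines look like "Node 0 MemTotal: 32829884 kB"; remove the
    # "Node <n> " prefix so each element starts at the key name.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep '^HugePages'

Snapshotting with mapfile and re-emitting the array (the common.sh@28/@16 entries in the trace) keeps the scan loop identical for the global and per-node cases; only mem_f changes.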
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: HugePages_Total, HugePages_Free tested against HugePages_Surp -- no match, continue]
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.758 17:46:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21648280 kB' 'MemUsed: 6063544 kB' 'SwapCached: 0 kB' 'Active: 3710808 kB' 'Inactive: 354604 kB' 'Active(anon): 3626792 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 354604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3788296 kB' 'Mapped: 53760 kB' 'AnonPages: 277152 kB' 'Shmem: 3349676 kB' 'KernelStack: 4824 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280232 kB' 'Slab: 486616 kB' 'SReclaimable: 280232 kB' 'SUnreclaim: 206384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:18.758-00:03:18.759 17:46:04-17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: same key sequence as node0 (MemTotal through Unaccepted) each tested against HugePages_Surp -- no match, continue]
00:03:18.759 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: HugePages_Total, HugePages_Free tested against HugePages_Surp -- no match, continue]
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:19.018 node0=512 expecting 512
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:19.018 node1=512 expecting 512
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:19.018 real 0m1.438s
00:03:19.018 user 0m0.576s
00:03:19.018 sys 0m0.826s
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.018 17:46:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:19.018 ************************************
00:03:19.018 END TEST even_2G_alloc
00:03:19.018 ************************************
00:03:19.018 17:46:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:19.018 17:46:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.018 17:46:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.018 17:46:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:19.018 ************************************
00:03:19.018 START TEST odd_alloc
00:03:19.018 ************************************
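Before odd_alloc's body starts below, it is worth pinning down what the even_2G_alloc verdict above actually checked: the 1024 allocated pages must equal nr_hugepages + surp + resv, and both nodes must report the same per-node count (512/512). The sorted_t/sorted_s assignments are the trick: using each count as an associative-array key means "all nodes equal" collapses to "exactly one key". A compact sketch of that invariant (variable names follow the trace; this is not the verbatim hugepages.sh):

    #!/usr/bin/env bash
    declare -A sorted_t
    nr_hugepages=1024 surp=0 resv=0
    nodes_test=([0]=512 [1]=512)    # per-node counts read back above

    (( nr_hugepages + surp + resv == 1024 )) || exit 1
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # identical counts share one key
        echo "node${node}=${nodes_test[node]} expecting 512"
    done
    (( ${#sorted_t[@]} == 1 )) && echo "even split confirmed"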
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:19.018 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:19.019 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:19.019 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.019 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:19.019 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:19.019 17:46:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:19.019 17:46:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.019 17:46:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:19.955 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:19.955 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:19.955 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:19.955 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:19.955 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:19.955 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:19.956 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:19.956 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:19.956 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
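The nodes_test assignments a few entries up are the interesting part of this setup: 1025 pages (HUGEMEM=2049, i.e. 2049 MB of 2 MB pages) cannot split evenly over two nodes, and the trace shows 512 landing on node1 and 513 on node0. The loop walks nodes from the highest index down, giving each floor(remaining / nodes-left), so the odd page accumulates on node 0. A sketch of that division, reconstructed from the @81-@84 trace lines (the ": 513" / ": 1" no-ops are xtrace's rendering of the arithmetic side effects):

    #!/usr/bin/env bash
    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test

    # Highest node first: each node gets remaining/nodes-left (integer
    # division), so the remainder drifts toward node 0.
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512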
00:03:19.956 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:19.956 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:19.956 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:19.956 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:19.956 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:19.956 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:19.956 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:19.956 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:20.219 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.220 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41179728 kB' 'MemAvailable: 45098100 kB' 'Buffers: 2704 kB' 'Cached: 14614644 kB' 'SwapCached: 0 kB' 'Active: 11472464 kB' 'Inactive: 3693412 kB' 'Active(anon): 11032692 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551532 kB' 'Mapped: 178932 kB' 'Shmem: 10484164 kB' 'KReclaimable: 429624 kB' 'Slab: 817964 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388340 kB' 'KernelStack: 12768 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12165240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
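One entry in the verify preamble above is easy to miss: hugepages.sh@96's [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]. That string is the kernel's transparent-hugepage mode (the bracketed word is the active setting), and the test only expects anonymous huge pages when THP is not pinned to "never". The same gate in isolation (a sketch; the sysfs path is the standard THP knob):

    #!/usr/bin/env bash
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    # The file reads e.g. "always [madvise] never"; match the literal
    # "[never]" to see whether THP is fully disabled.
    if [[ $thp != *"[never]"* ]]; then
        echo "THP active mode: $thp - AnonHugePages may be non-zero"
    else
        echo "THP disabled - expect AnonHugePages: 0 kB"
    fi

Here it matched "always [madvise] never", so the scan below goes on to read AnonHugePages and gets 0.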
00:03:20.220-00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS each tested against AnonHugePages -- no match, continue]
17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
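The run above captures one full get_meminfo call end to end: the helper reads /proc/meminfo (or, when a NUMA node is passed, the matching /sys/devices/system/node/node<N>/meminfo, after stripping the 'Node <N> ' prefix those lines carry), then scans the fields with IFS=': ' until the requested name matches literally, and echoes that field's value -- here AnonHugePages, giving anon=0. A minimal sketch of the same pattern, reconstructed from the probes visible in this trace rather than copied from setup/common.sh (the function name meminfo_field and the error handling are assumptions):

#!/usr/bin/env bash
# Sketch only: mirrors the mem_f probe, mapfile, Node-prefix strip and
# IFS=': ' read loop shown in the xtrace above; not the verbatim helper.
meminfo_field() {
    local get=$1 node=$2 var val _ mem_f line
    local -a mem
    mem_f=/proc/meminfo
    # With $node empty the probed path is '/sys/devices/system/node/node/meminfo',
    # which never exists, so the global file wins -- exactly the '[[ -e ... ]]'
    # test visible in the trace.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }") # per-node lines are prefixed 'Node <N> '
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

meminfo_field AnonHugePages # prints 0 on this box, matching anon=0 above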
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41181664 kB' 'MemAvailable: 45100036 kB' 'Buffers: 2704 kB' 'Cached: 14614644 kB' 'SwapCached: 0 kB' 'Active: 11472472 kB' 'Inactive: 3693412 kB' 'Active(anon): 11032700 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551412 kB' 'Mapped: 178884 kB' 'Shmem: 10484164 kB' 'KReclaimable: 429624 kB' 'Slab: 817948 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388324 kB' 'KernelStack: 12912 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12167276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB' 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
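The snapshot just printed is internally consistent with the test's intent: HugePages_Total and HugePages_Free both read 1025 (the odd page count odd_alloc configures), HugePages_Rsvd and HugePages_Surp are 0, and the hugetlb pool size follows from the default 2 MiB page size. A quick check of that arithmetic:

echo $((1025 * 2048)) # 2099200 -- matches 'Hugetlb: 2099200 kB' in the snapshot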
00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.221 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.222 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41179928 kB' 'MemAvailable: 45098300 kB' 'Buffers: 2704 kB' 'Cached: 14614660 kB' 'SwapCached: 0 kB' 'Active: 11475380 kB' 'Inactive: 3693412 kB' 'Active(anon): 11035608 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555120 kB' 'Mapped: 179320 kB' 'Shmem: 10484180 kB' 'KReclaimable: 429624 kB' 'Slab: 817948 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388324 kB' 'KernelStack: 13136 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12170140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.223 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.223 17:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 
17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
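Note how every field name on the right-hand side of these comparisons is logged escaped character by character ('\H\u\g\e\P\a\g\e\s\_\R\s\v\d' and so on). That is ordinary bash xtrace output for a quoted operand inside [[ ]]: the escaping marks the expanded word as a literal string, so the field is matched exactly instead of being treated as a glob pattern. A standalone reproduction in stock bash, independent of the SPDK scripts:

set -x
get=HugePages_Rsvd
[[ MemTotal == "$get" ]] # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]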
00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.224 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.225 
17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:20.225 nr_hugepages=1025
17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.225 resv_hugepages=0
17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.225 surplus_hugepages=0
17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.225 anon_hugepages=0
17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.225 17:46:06
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41176800 kB' 'MemAvailable: 45095172 kB' 'Buffers: 2704 kB' 'Cached: 14614684 kB' 'SwapCached: 0 kB' 'Active: 11479028 kB' 'Inactive: 3693412 kB' 'Active(anon): 11039256 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558276 kB' 'Mapped: 179656 kB' 'Shmem: 10484204 kB' 'KReclaimable: 429624 kB' 'Slab: 817956 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388332 kB' 'KernelStack: 13200 kB' 'PageTables: 9568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 12172416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 
17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.225 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.226 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19562488 kB' 'MemUsed: 13267396 kB' 'SwapCached: 0 kB' 'Active: 7756760 kB' 'Inactive: 3338808 kB' 'Active(anon): 7401004 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3338808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10820892 kB' 'Mapped: 125632 kB' 'AnonPages: 277392 kB' 'Shmem: 7126328 kB' 'KernelStack: 8120 kB' 'PageTables: 5568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149392 kB' 'Slab: 331516 kB' 'SReclaimable: 149392 kB' 'SUnreclaim: 182124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
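The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or, for the per-node queries, /sys/devices/system/node/nodeN/meminfo) one "key: value" line at a time until it finds the requested field, then echoing the value (1025 for HugePages_Total here) and returning. A minimal sketch of that helper, reconstructed from the trace; the real setup/common.sh may differ in control flow:

    # sketch of get_meminfo as replayed in the trace above (not verbatim)
    shopt -s extglob                     # needed for the +([0-9]) pattern
    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem line
        local mem_f=/proc/meminfo
        # node-scoped queries read the node's own meminfo; those lines are
        # prefixed with "Node <N> ", which is stripped before parsing
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                  # e.g. 1025 for HugePages_Total
            return 0
        done
    }

Called without a node argument, the existence test degenerates to the nonexistent path /sys/devices/system/node/node/meminfo seen at common.sh@23, so the helper falls back to the system-wide /proc/meminfo.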
00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.227 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.228 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
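Every comparison in these scans renders as [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] because the right-hand side of == inside [[ ]] was quoted in the script: bash matches a quoted pattern literally, and its xtrace output backslash-escapes each character of the pattern to make that explicit. A two-line reproduction (behavior as observed with the bash builds in this log; variable names hypothetical):

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]   # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]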
00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 21612728 kB' 'MemUsed: 6099096 kB' 'SwapCached: 0 kB' 'Active: 3718588 kB' 'Inactive: 354604 kB' 'Active(anon): 3634572 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 354604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3796496 kB' 'Mapped: 53764 kB' 'AnonPages: 276716 kB' 'Shmem: 3357876 kB' 'KernelStack: 4760 kB' 'PageTables: 2456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280232 kB' 'Slab: 486408 kB' 'SReclaimable: 280232 kB' 'SUnreclaim: 206176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
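With node0 answered (HugePages_Surp: 0), the same query now runs against node1, whose /sys/devices/system/node/node1/meminfo dump appears just above. The bookkeeping around these queries, as it appears at hugepages.sh@115-117, folds reserved and surplus pages into each node's measured count; a condensed sketch, assuming the get_meminfo sketch earlier, with nodes_test being the harness's per-node tally and resv supplied by the caller (the trace shows it adding 0 on both nodes):

    # per-node accumulation replayed at hugepages.sh@115-117 (sketch)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # reserved pages
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus
        (( nodes_test[node] += surp ))               # adds 0 on both nodes here
    done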
00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.229 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.489 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:20.490 node0=512 expecting 513 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:20.490 node1=513 expecting 512 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:20.490 00:03:20.490 real 0m1.434s 00:03:20.490 user 0m0.632s 00:03:20.490 sys 0m0.763s 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.490 17:46:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:20.490 ************************************ 00:03:20.490 END TEST odd_alloc 00:03:20.490 ************************************ 00:03:20.490 17:46:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:20.490 17:46:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.490 17:46:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.490 17:46:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.490 ************************************ 00:03:20.491 START TEST custom_alloc 00:03:20.491 ************************************ 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.491 
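odd_alloc passes even though the per-node echoes look inverted (node0=512 expecting 513, node1=513 expecting 512): sorted_t and sorted_s collect the measured and expected counts as associative-array keys, so the final [[ 512 513 == \5\1\2\ \5\1\3 ]] at hugepages.sh@130 compares the two multisets and accepts the {512,513} split of the 1025 odd pages regardless of which node received the extra page. A sketch of that comparison, reconstructed from the trace (bash does not guarantee a key expansion order, but identical key sets expand identically on the build used here):

    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # measured per-node counts as keys
        sorted_s[${nodes_sys[node]}]=1    # expected per-node counts as keys
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # {512,513} vs {512,513}

The custom_alloc test that starts next derives its page counts from sizes in kB: get_test_nr_hugepages divides the requested size by the 2048 kB Hugepagesize, so 1048576 kB yields nr_hugepages=512 and 2097152 kB yields nr_hugepages=1024, producing the HUGENODE string nodes_hp[0]=512,nodes_hp[1]=1024 (1536 pages total) that the verify pass below checks against HugePages_Total.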
17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.428 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.428 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.428 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.428 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.428 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.428 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:21.428 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:21.428 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:21.428 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.428 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.428 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.428 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.428 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.428 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.428 0000:80:04.2 (8086 0e22): 
00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.491 17:46:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.428 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.428 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.428 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.428 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.428 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.428 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.428 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.428 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.428 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.428 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.428 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.428 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.428 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.428 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.428 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.428 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.428 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
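
The "Already using the vfio-pci driver" lines are setup.sh reporting the kernel driver currently bound to each PCI function. A hypothetical helper (the name current_driver is invented for this sketch, not setup.sh's own code) showing how that binding can be read back from sysfs:

    current_driver() {
        local bdf=$1
        local link="/sys/bus/pci/devices/$bdf/driver"
        if [[ -e $link ]]; then
            basename "$(readlink -f "$link")"    # driver symlink -> driver name
        else
            echo none                            # device currently unbound
        fi
    }

    current_driver 0000:0b:00.0    # prints vfio-pci on this test node
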
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40153700 kB' 'MemAvailable: 44072072 kB' 'Buffers: 2704 kB' 'Cached: 14614776 kB' 'SwapCached: 0 kB' 'Active: 11471864 kB' 'Inactive: 3693412 kB' 'Active(anon): 11032092 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550928 kB' 'Mapped: 178828 kB' 'Shmem: 10484296 kB' 'KReclaimable: 429624 kB' 'Slab: 817936 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388312 kB' 'KernelStack: 12768 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12164136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.690 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same compare-and-continue trace repeats for each remaining /proc/meminfo field ...]
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
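
The block above is one full pass of get_meminfo: slurp the meminfo file, strip any per-node "Node N " prefix, split each line on ': ', and print the value of the requested field (0 for AnonHugePages here, hence anon=0). A stripped-down re-creation of that pattern, reconstructed from the trace rather than copied from setup/common.sh:

    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local mem line var val _
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo AnonHugePages    # prints 0 on this box, matching anon=0 above
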
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40154044 kB' 'MemAvailable: 44072416 kB' 'Buffers: 2704 kB' 'Cached: 14614780 kB' 'SwapCached: 0 kB' 'Active: 11471716 kB' 'Inactive: 3693412 kB' 'Active(anon): 11031944 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550784 kB' 'Mapped: 178820 kB' 'Shmem: 10484300 kB' 'KReclaimable: 429624 kB' 'Slab: 817936 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388312 kB' 'KernelStack: 12800 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12164156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.691 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same compare-and-continue trace repeats for each remaining /proc/meminfo field ...]
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
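
With anon and surp collected, and HugePages_Rsvd read next, verify_nr_hugepages has what it needs to compare the kernel's view against the 1536 pages requested. A plausible sketch of that comparison; the exact accounting inside setup/hugepages.sh is assumed here, not quoted:

    verify_nr_hugepages_sketch() {
        local expected=$1 total surp resv
        total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
        surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
        resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
        # Surplus pages were allocated on demand rather than preallocated,
        # so discount them before comparing against the configured count.
        if (( total - surp == expected )); then
            echo "OK: $total total, $surp surplus, $resv reserved"
        else
            echo "FAIL: expected $expected, kernel reports $(( total - surp ))" >&2
            return 1
        fi
    }

    verify_nr_hugepages_sketch 1536    # the snapshots above show HugePages_Total: 1536
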
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.693 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40154260 kB' 'MemAvailable: 44072632 kB' 'Buffers: 2704 kB' 'Cached: 14614796 kB' 'SwapCached: 0 kB' 'Active: 11471712 kB' 'Inactive: 3693412 kB' 'Active(anon): 11031940 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550796 kB' 'Mapped: 178820 kB' 'Shmem: 10484316 kB' 'KReclaimable: 429624 kB' 'Slab: 818000 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388376 kB' 'KernelStack: 12816 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12164176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:21.694 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.694 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:21.694 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.694 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same compare-and-continue trace repeats for the remaining /proc/meminfo fields ...]
setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:21.695 nr_hugepages=1536 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.695 resv_hugepages=0 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.695 surplus_hugepages=0 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.695 anon_hugepages=0 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.695 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.696 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 40155820 kB' 'MemAvailable: 44074192 kB' 'Buffers: 2704 kB' 'Cached: 14614796 kB' 'SwapCached: 0 kB' 'Active: 11471632 kB' 'Inactive: 3693412 kB' 'Active(anon): 11031860 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550516 kB' 'Mapped: 178820 kB' 'Shmem: 10484316 kB' 'KReclaimable: 429624 kB' 'Slab: 818000 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388376 kB' 'KernelStack: 12832 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 12164196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
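The xtrace above is the test's get_meminfo helper (setup/common.sh) at work: read the whole meminfo file into an array, strip any "Node <n> " prefix, then split each line on ': ' and echo the value whose key matches the requested field. A minimal self-contained bash sketch of that technique; the function name and usage lines are illustrative, not the verbatim SPDK helper:

  #!/usr/bin/env bash
  shopt -s extglob   # required for the +([0-9]) pattern used below

  # Illustrative re-creation of the get_meminfo idea traced in this log.
  get_meminfo_value() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      local -a mem
      # Prefer the per-NUMA-node statistics when a node index is supplied
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip that prefix
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo_value HugePages_Total      # prints 1536 on this box
  get_meminfo_value HugePages_Surp 0     # prints 0 for node 0 in this run

The linear scan is what produces the long runs of [[ ... ]] / continue in this log: every key that precedes the requested one costs one failed match.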
[xtrace condensed: setup/common.sh@31-32 tests every field of the dump above, from MemTotal through HugePages_Free, against HugePages_Total and skips each via continue]
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
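get_nodes, traced just above, discovers the NUMA layout by globbing /sys/devices/system/node/node<N> with an extglob pattern; here it finds two nodes and records the split this test asked for (512 pages on node0, 1024 on node1). A short sketch of the same enumeration, with the array name and final echo purely illustrative:

  #!/usr/bin/env bash
  shopt -s extglob nullglob

  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} strips everything up to the last "node" in the path,
      # leaving just the numeric index (node0 -> 0, node1 -> 1)
      nodes_sys[${node##*node}]=0
  done
  no_nodes=${#nodes_sys[@]}
  echo "detected $no_nodes NUMA node(s)"

With nullglob set, the loop simply runs zero times on a kernel without those sysfs entries, which is why the script can follow it with a plain (( no_nodes > 0 )) sanity check.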
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19568944 kB' 'MemUsed: 13260940 kB' 'SwapCached: 0 kB' 'Active: 7756324 kB' 'Inactive: 3338808 kB' 'Active(anon): 7400568 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3338808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10821032 kB' 'Mapped: 125480 kB' 'AnonPages: 277236 kB' 'Shmem: 7126468 kB' 'KernelStack: 8024 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149392 kB' 'Slab: 331556 kB' 'SReclaimable: 149392 kB' 'SUnreclaim: 182164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:21.697 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every field of the node0 dump above, from MemTotal through HugePages_Free, fails the HugePages_Surp match and is skipped via setup/common.sh@32 continue]
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
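Each pass of the hugepages.sh@115 loop turns the raw readings into a measured per-node figure: the node's HugePages_Total, plus the reserved pages (resv, 0 in this run), plus the node's HugePages_Surp (also 0 here). A condensed sketch of that accounting and of the comparison echoed at the end of this section, reusing the helper sketched earlier; the array literals are this run's numbers, filled in for illustration:

  # Requested split of the 1536-page custom allocation in this run
  nodes_sys=([0]=512 [1]=1024)
  nodes_test=()
  resv=0   # HugePages_Rsvd, read from /proc/meminfo above

  for node in "${!nodes_sys[@]}"; do
      nodes_test[node]=$(get_meminfo_value HugePages_Total "$node")
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo_value HugePages_Surp "$node") ))
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done

For node0 that works out to 512 + 0 + 0 against a request of 512, the node0=512 expecting 512 line that closes this part of the log; node1 follows the same path with its 1024 pages, consistent with the 1536-page total confirmed earlier.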
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:21.699 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.957 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.957 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.957 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20587552 kB' 'MemUsed: 7124272 kB' 'SwapCached: 0 kB' 'Active: 3715428 kB' 'Inactive: 354604 kB' 'Active(anon): 3631412 kB' 'Inactive(anon): 0 kB' 'Active(file): 84016 kB' 'Inactive(file): 354604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3796512 kB' 'Mapped: 53340 kB' 'AnonPages: 273564 kB' 'Shmem: 3357892 kB' 'KernelStack: 4792 kB' 'PageTables: 2636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 280232 kB' 'Slab: 486444 kB' 'SReclaimable: 280232 kB' 'SUnreclaim: 206212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:21.957 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every field of the node1 dump above, from MemTotal through HugePages_Free, fails the HugePages_Surp match and is skipped via setup/common.sh@32 continue]
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:21.959 node0=512
expecting 512 00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:21.959 node1=1024 expecting 1024 00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:21.959 00:03:21.959 real 0m1.430s 00:03:21.959 user 0m0.592s 00:03:21.959 sys 0m0.799s 00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.959 17:46:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.959 ************************************ 00:03:21.959 END TEST custom_alloc 00:03:21.959 ************************************ 00:03:21.959 17:46:07 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:21.959 17:46:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.959 17:46:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.959 17:46:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.959 ************************************ 00:03:21.959 START TEST no_shrink_alloc 00:03:21.959 ************************************ 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.959 17:46:08 
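The get_test_nr_hugepages trace that follows computes how many 2048 kB pages the test will request and pins them all to node 0. For reference, the arithmetic in isolation, as a minimal sketch reconstructed from the traced values (variable names follow the trace; everything else is illustrative, not the upstream function verbatim):

    #!/usr/bin/env bash
    # Per-node hugepage target computation, per the xtrace below:
    # 2097152 kB requested / 2048 kB per page = 1024 pages, all on node 0.
    size=2097152              # requested pool size in kB (from the trace)
    default_hugepages=2048    # kB per page, per 'Hugepagesize: 2048 kB' in the snapshots
    node_ids=('0')            # explicit node list passed by the test
    nr_hugepages=$(( size / default_hugepages ))   # -> 1024
    nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages
    done
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]}"     # -> node0=1024
    done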
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.959 17:46:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:22.893 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:22.893 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:22.893 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:22.893 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:22.893 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:22.893 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:22.893 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.154 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.154 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.154 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.154 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.154 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.154 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.154 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.154 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.154 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.154 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
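The get_meminfo call being traced here (setup/common.sh) is the workhorse of these checks: it snapshots /proc/meminfo (or a per-node meminfo file), strips any "Node N " prefix, then scans key by key until the requested field matches, which is exactly what produces the long runs of IFS/read/continue records. A minimal self-contained sketch of that pattern, reconstructed from the xtrace rather than copied from the upstream source:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below
    get_meminfo() {     # usage: get_meminfo <Key> [node]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # Per-node meminfo files prefix every key with "Node N "; use one if asked.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix if present
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated 'continue' records above
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo AnonHugePages    # prints 0 on this box, matching 'anon=0' below

The scan is linear, so keys near the end of /proc/meminfo (the HugePages_* counters) generate the longest runs of skipped-key records in the trace.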
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.154 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41200252 kB' 'MemAvailable: 45118624 kB' 'Buffers: 2704 kB' 'Cached: 14614900 kB' 'SwapCached: 0 kB' 'Active: 11472784 kB' 'Inactive: 3693412 kB' 'Active(anon): 11033012 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551840 kB' 'Mapped: 178840 kB' 'Shmem: 10484420 kB' 'KReclaimable: 429624 kB' 'Slab: 818184 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388560 kB' 'KernelStack: 12832 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12163732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197112 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
[xtrace elided: the setup/common.sh@31-32 records (IFS=': ' / read -r var val _ / continue) repeat identically for every key from MemTotal through HardwareCorrupted until the requested key matches below]
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.155 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41200956 kB' 'MemAvailable: 45119328 kB' 'Buffers: 2704 kB' 'Cached: 14614900 kB' 'SwapCached: 0 kB' 'Active: 11471692 kB' 'Inactive: 3693412 kB' 'Active(anon): 11031920 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550632 kB' 'Mapped: 178832 kB' 'Shmem: 10484420 kB' 'KReclaimable: 429624 kB' 'Slab: 818160 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388536 kB' 'KernelStack: 12768 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12163880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
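verify_nr_hugepages collects three of these counters (anon, surp, resv) and checks the pool against what the test configured. The bookkeeping, sketched with the get_meminfo helper above; the free == total check is an illustrative simplification of the verification, not the verbatim upstream assertion:

    anon=$(get_meminfo AnonHugePages)    # THP in use, kB        -> 0 here
    surp=$(get_meminfo HugePages_Surp)   # pages beyond the pool -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # reserved, not faulted -> 0
    total=$(get_meminfo HugePages_Total) # configured pool       -> 1024
    free=$(get_meminfo HugePages_Free)   # still unallocated     -> 1024
    # Nothing is mapped yet, so the pool should be intact and unexpanded:
    (( surp == 0 && resv == 0 && free == total )) || echo 'hugepage accounting off' >&2

With the snapshot values above (surp=0, resv=0, free=total=1024) the check passes, which is what the surp=0 / resv=0 assignments in the trace below confirm.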
[xtrace elided: the setup/common.sh@31-32 records (IFS=': ' / read -r var val _ / continue) repeat identically for every key from MemTotal through HugePages_Rsvd until the requested key matches below]
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41200704 kB' 'MemAvailable: 45119076 kB' 'Buffers: 2704 kB' 'Cached: 14614900 kB' 'SwapCached: 0 kB' 'Active: 11471712 kB' 'Inactive: 3693412 kB' 'Active(anon): 11031940 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550688 kB' 'Mapped: 178832 kB' 'Shmem: 10484420 kB' 'KReclaimable: 429624 kB' 'Slab: 818212 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388588 kB' 'KernelStack: 12816 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12163904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
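The per-node splits reported earlier ("node0=512 expecting 512", "node1=1024 expecting 1024") come from the same counters broken out per NUMA node. They can be read directly from the standard kernel sysfs paths; a small sketch for 2048 kB pages, the Hugepagesize these snapshots report:

    # Print the configured hugepage count for each NUMA node.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        nr_file=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
        [[ -r $nr_file ]] || continue
        echo "${node_dir##*/}=$(<"$nr_file")"   # e.g. node0=512, node1=1024 earlier
    done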
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.157 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue xtrace cycle repeats for each intermediate /proc/meminfo key]
00:03:23.159 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.159 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.159
17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.159 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.159 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.419 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
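The xtrace above is setup/common.sh get_meminfo scanning the mapfile'd meminfo text key by key until the requested field, HugePages_Rsvd, matches, then echoing its value column. As a rough stand-alone sketch of that lookup (get_meminfo_sketch is a made-up name, not the SPDK helper itself, which also handles the per-node meminfo files used further down):

    # Sketch of the lookup traced above; illustrative only, not SPDK's code.
    # Echoes the value column of one /proc/meminfo key, e.g. HugePages_Rsvd.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Rsvd    # prints 0 in this run, hence resv=0 below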
00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.420 nr_hugepages=1024 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.420 resv_hugepages=0 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.420 surplus_hugepages=0 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.420 anon_hugepages=0 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41200816 kB' 'MemAvailable: 45119188 kB' 'Buffers: 2704 kB' 'Cached: 14614944 kB' 'SwapCached: 0 kB' 'Active: 11471956 kB' 'Inactive: 3693412 kB' 'Active(anon): 11032184 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550944 kB' 'Mapped: 178832 kB' 'Shmem: 10484464 kB' 'KReclaimable: 429624 kB' 'Slab: 818212 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388588 kB' 'KernelStack: 12832 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12164292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB' 00:03:23.420 
17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.420 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue xtrace cycle repeats for each intermediate /proc/meminfo key]
00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.421 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18515312 kB' 'MemUsed: 14314572 kB' 'SwapCached: 0 kB' 'Active: 7756032 kB' 'Inactive: 3338808 kB' 'Active(anon): 7400276 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3338808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10821152 kB' 'Mapped: 125480 kB' 'AnonPages: 276816 kB' 'Shmem: 7126588 kB' 'KernelStack: 8040 kB' 'PageTables: 5164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149392 kB' 'Slab: 331660 kB' 'SReclaimable: 149392 kB' 'SUnreclaim: 182268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
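At this point hugepages.sh has moved from the system-wide /proc/meminfo to the per-node file /sys/devices/system/node/node0/meminfo (get_nodes counted no_nodes=2 just above, and common.sh strips the "Node <N> " prefix those files carry via "${mem[@]#Node +([0-9]) }"). A hedged sketch of that per-node walk, assuming the usual sysfs layout rather than SPDK's exact code:

    # Sketch: report HugePages_Total for every NUMA node, as get_nodes/get_meminfo do.
    # Per-node meminfo lines look like "Node 0 HugePages_Total: 1024", so take the last field.
    for node in /sys/devices/system/node/node[0-9]*; do
        total=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
        echo "node${node##*node}: HugePages_Total=$total"
    done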
00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.422 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace cycle repeats for each intermediate node0 meminfo key]
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.423 17:46:09
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:23.423 node0=1024 expecting 1024
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.423 17:46:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:24.358 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:24.358 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:24.358 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:24.358 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:24.358 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:24.358 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:24.358 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:24.358 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:24.358 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:24.358 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:24.358 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:24.358 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:24.358 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:24.358 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:24.358 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:24.358 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:24.358 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:24.621 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
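The repeated continue/read traces above all come from one helper: get_meminfo in setup/common.sh walks a meminfo snapshot line by line and prints the value of the single requested key, returning 0 on a match. A minimal sketch of that pattern, assuming plain bash; the variable names mirror the trace, but this is an illustration, not the verbatim SPDK helper:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-} mem_f line var val _
        local -a mem
        mem_f=/proc/meminfo
        # With a node id, prefer the NUMA-local counters when they exist.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip it, as the
        # mem=("${mem[@]#Node +([0-9]) }") step in the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Every non-matching key shows up as one "continue" in the xtrace.
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this host, per the snapshots below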
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.621 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41207468 kB' 'MemAvailable: 45125840 kB' 'Buffers: 2704 kB' 'Cached: 14615016 kB' 'SwapCached: 0 kB' 'Active: 11471928 kB' 'Inactive: 3693412 kB' 'Active(anon): 11032156 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550800 kB' 'Mapped: 178896 kB' 'Shmem: 10484536 kB' 'KReclaimable: 429624 kB' 'Slab: 817988 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388364 kB' 'KernelStack: 12848 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12164508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
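The hugepages.sh@96 test a few lines up is a transparent-hugepage gate: the bracketed token in /sys/kernel/mm/transparent_hugepage/enabled marks the kernel's active THP mode ([madvise] on this host), and AnonHugePages is only consulted when that mode is not [never]. A sketch of the same gate, reusing the get_meminfo sketch above; the sysfs path is real, the surrounding code is illustrative:

    thp_enabled=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    # "$(<file)" expands to e.g. "always [madvise] never"; this is the
    # unescaped form of the [[ ... != *\[\n\e\v\e\r\]* ]] test in the trace.
    if [[ $(<"$thp_enabled") != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in this run's snapshot
    fi
    echo "anon=$anon"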
00:03:24.621 [xtrace elided: setup/common.sh@31-32; the scan runs IFS=': ' read -r var val _ over each snapshot entry and hits continue for every key from MemTotal through HardwareCorrupted, none of which matches AnonHugePages]
00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:24.622 [xtrace elided: setup/common.sh@17-31 prologue as above, this time with get=HugePages_Surp; node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ' read -r var val _]
# mem=("${mem[@]#Node +([0-9]) }") 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41207860 kB' 'MemAvailable: 45126232 kB' 'Buffers: 2704 kB' 'Cached: 14615020 kB' 'SwapCached: 0 kB' 'Active: 11472552 kB' 'Inactive: 3693412 kB' 'Active(anon): 11032780 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551436 kB' 'Mapped: 178896 kB' 'Shmem: 10484540 kB' 'KReclaimable: 429624 kB' 'Slab: 817980 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388356 kB' 'KernelStack: 12832 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12164524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB' 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.622 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.622 17:46:10 
00:03:24.622 [xtrace elided: setup/common.sh@31-32; every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp comparison and hits continue]
00:03:24.623 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.623 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.623 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:24.623 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:24.623 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:24.623 [xtrace elided: setup/common.sh@17-31 prologue as above, with get=HugePages_Rsvd]
00:03:24.623 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41207564 kB' 'MemAvailable: 45125936 kB' 'Buffers: 2704 kB' 'Cached: 14615024 kB' 'SwapCached: 0 kB' 'Active: 11471392 kB' 'Inactive: 3693412 kB' 'Active(anon): 11031620 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550216 kB' 'Mapped: 178836 kB' 'Shmem: 10484544 kB' 'KReclaimable: 429624 kB' 'Slab: 818004 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388380 kB' 'KernelStack: 12832 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12164548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
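verify_nr_hugepages is accumulating toward the same check that already passed once above (node0=1024 expecting 1024): surplus and reserved pages are read alongside the allocated total, and the netted count is compared with what the test expects. A rough sketch of that comparison, reusing the get_meminfo sketch; the exact bookkeeping in setup/hugepages.sh (the nodes_test/sorted_t/sorted_s arrays) is more involved than this:

    expected=1024
    total=$(get_meminfo HugePages_Total)   # 1024 in the snapshot above
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    echo "node0=$((total - surp)) expecting $expected"
    if (( total - surp == expected )); then
        echo "hugepage count verified (resv=$resv)"
    fi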
00:03:24.623 [xtrace elided: setup/common.sh@31-32; the HugePages_Rsvd scan continues through the keys MemTotal to NFS_Unstable, each hitting continue; the excerpt ends mid-scan at 00:03:24.624]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.624 nr_hugepages=1024 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.624 resv_hugepages=0 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.624 surplus_hugepages=0 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.624 anon_hugepages=0 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.624 17:46:10 
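For orientation: the xtrace above is the setup/common.sh get_meminfo helper resolving HugePages_Rsvd. It loads /proc/meminfo (or a per-node meminfo file when a node argument is given), splits each line on ': ', and prints the value of the first matching field. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

get_meminfo() { # e.g. get_meminfo HugePages_Rsvd  or  get_meminfo HugePages_Surp 0
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    # A node argument switches to the per-node statistics file under sysfs
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob                   # needed for the +([0-9]) pattern below
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

On this box the lookup printed 0, matching the resv=0 assignment traced above.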
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.624 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.625 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.625 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41207408 kB' 'MemAvailable: 45125780 kB' 'Buffers: 2704 kB' 'Cached: 14615060 kB' 'SwapCached: 0 kB' 'Active: 11471736 kB' 'Inactive: 3693412 kB' 'Active(anon): 11031964 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3693412 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550532 kB' 'Mapped: 178836 kB' 'Shmem: 10484580 kB' 'KReclaimable: 429624 kB' 'Slab: 818004 kB' 'SReclaimable: 429624 kB' 'SUnreclaim: 388380 kB' 'KernelStack: 12848 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 12164568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 41664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1803868 kB' 'DirectMap2M: 18038784 kB' 'DirectMap1G: 49283072 kB'
00:03:24.625 [xtrace condensed: setup/common.sh@31-32 repeat for each field from MemTotal through Unaccepted — none matches \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, each hits continue]
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
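The get_nodes trace just above enumerates the NUMA nodes and records a per-node hugepage figure (1024 on node0, 0 on node1). The xtrace only shows the right-hand sides after expansion; reading each node's 2048 kB nr_hugepages count is an assumption about where those numbers come from. A sketch:

shopt -s extglob
declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # Assumed source of the traced values: the per-node 2 MB hugepage count
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}      # 2 on this machine
(( no_nodes > 0 )) || echo "no NUMA nodes detected" >&2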
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:24.626 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18522848 kB' 'MemUsed: 14307036 kB' 'SwapCached: 0 kB' 'Active: 7756060 kB' 'Inactive: 3338808 kB' 'Active(anon): 7400304 kB' 'Inactive(anon): 0 kB' 'Active(file): 355756 kB' 'Inactive(file): 3338808 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10821256 kB' 'Mapped: 125480 kB' 'AnonPages: 276708 kB' 'Shmem: 7126692 kB' 'KernelStack: 8056 kB' 'PageTables: 5168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 149392 kB' 'Slab: 331588 kB' 'SReclaimable: 149392 kB' 'SUnreclaim: 182196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:24.626 [xtrace condensed: setup/common.sh@31-32 repeat for each node0 field from MemTotal through HugePages_Free — none matches \H\u\g\e\P\a\g\e\s\_\S\u\r\p, each hits continue]
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:24.627 node0=1024 expecting 1024
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:24.627 
00:03:24.627 real 0m2.824s
00:03:24.627 user 0m1.160s
00:03:24.627 sys 0m1.591s
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:24.627 17:46:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:24.627 ************************************
00:03:24.627 END TEST no_shrink_alloc
00:03:24.627 ************************************
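Pulling the no_shrink_alloc accounting together: node0's page count, plus the reserved and surplus pages just looked up, must still equal the 1024 pages requested. Using the get_meminfo sketch from earlier and this run's values:

resv=0 surp=0
nodes_test0=1024                                       # node0 count, from get_nodes
(( nodes_test0 += resv ))
(( nodes_test0 += $(get_meminfo HugePages_Surp 0) ))   # adds 0 in this run
echo "node0=$nodes_test0 expecting 1024"               # matches: the allocation did not shrink
[[ $nodes_test0 == 1024 ]]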
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:24.627 17:46:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:24.627 00:03:24.627 real 0m11.372s 00:03:24.627 user 0m4.317s 00:03:24.627 sys 0m5.924s 00:03:24.627 17:46:10 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.627 17:46:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.627 ************************************ 00:03:24.627 END TEST hugepages 00:03:24.627 ************************************ 00:03:24.885 17:46:10 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:24.885 17:46:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.885 17:46:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.885 17:46:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.885 ************************************ 00:03:24.885 START TEST driver 00:03:24.885 ************************************ 00:03:24.885 17:46:10 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:24.885 * Looking for test storage... 
00:03:24.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.885 17:46:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:24.885 17:46:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.885 17:46:10 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.415 17:46:13 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:27.415 17:46:13 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.415 17:46:13 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.415 17:46:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:27.415 ************************************ 00:03:27.415 START TEST guess_driver 00:03:27.415 ************************************ 00:03:27.415 17:46:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:27.415 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:27.415 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:27.415 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:27.415 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:27.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:27.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:27.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:27.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:27.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:27.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:27.416 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:27.416 17:46:13 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:27.416 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 17:46:13 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 17:46:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 17:46:13 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config [trace condensed: setup/driver.sh@57-61 read each marker line from 'setup.sh config' (00:03:28.350-00:03:29.581) and confirmed '[[ -> == \-\> ]]' and '[[ vfio-pci == vfio-pci ]]' for every listed device] 17:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 17:46:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 17:46:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 17:46:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.135 00:03:32.135 real 0m4.829s 00:03:32.135 user 0m1.058s 00:03:32.135 sys 0m1.794s 00:03:32.135 17:46:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.135 17:46:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:32.135 ************************************ 00:03:32.135 END TEST guess_driver 00:03:32.135 ************************************ 00:03:32.135 00:03:32.135 real 0m7.373s 00:03:32.135 user 0m1.599s 00:03:32.135 sys 0m2.801s 00:03:32.135 17:46:18 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
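[editor's note: guess_driver, traced above, settles on vfio-pci by checking that IOMMU groups are populated and that vfio_pci resolves to real kernel modules via modprobe --show-depends. A condensed sketch of that decision under the same conditions the trace shows (unsafe noiommu mode left at N); this simplification is an illustration, not the SPDK function itself.]

# Pick vfio-pci when the platform can support it, else report the failure
# string that driver.sh@51 above tests for.
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    # IOMMU groups must exist and vfio_pci must resolve to loadable .ko modules.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}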
17:46:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:32.135 ************************************ 00:03:32.135 END TEST driver 00:03:32.135 ************************************ 00:03:32.135 17:46:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:32.135 17:46:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.135 17:46:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.135 17:46:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.135 ************************************ 00:03:32.135 START TEST devices 00:03:32.135 ************************************ 00:03:32.135 17:46:18 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:32.135 * Looking for test storage... 00:03:32.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.135 17:46:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:32.135 17:46:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:32.135 17:46:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.135 17:46:18 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.039 17:46:19 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:34.039 17:46:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:34.039 17:46:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:34.039 17:46:19 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:34.039 No valid GPT data, 
bailing 00:03:34.039 17:46:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.039 17:46:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:34.039 17:46:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:34.040 17:46:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:34.040 17:46:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:34.040 17:46:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:34.040 17:46:19 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:34.040 17:46:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:34.040 17:46:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:34.040 17:46:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:03:34.040 17:46:19 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:34.040 17:46:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:34.040 17:46:19 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:34.040 17:46:19 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.040 17:46:19 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.040 17:46:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:34.040 ************************************ 00:03:34.040 START TEST nvme_mount 00:03:34.040 ************************************ 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:34.040 17:46:19 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:34.040 17:46:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:34.978 Creating new GPT entries in memory. 00:03:34.978 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:34.978 other utilities. 00:03:34.978 17:46:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:34.978 17:46:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.978 17:46:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:34.978 17:46:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:34.978 17:46:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:35.915 Creating new GPT entries in memory. 00:03:35.915 The operation has completed successfully. 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2648265 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:35.915 17:46:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
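[editor's note: the nvme_mount test above prepares the disk with sgdisk and waits for the partition uevent before formatting and mounting. The core sequence, reduced to plain commands: SPDK drives the uevent wait through its own sync_dev_uevents.sh, so the udevadm settle below is a stand-in, and the device and mount point are examples.]

dev=/dev/nvme0n1
mnt=/tmp/nvme_mount                 # example mount point
sgdisk "$dev" --zap-all             # destroy existing GPT/MBR structures
sgdisk "$dev" --new=1:2048:2099199  # 1 GiB partition: (2099199-2048+1)*512 bytes
udevadm settle                      # wait until /dev/nvme0n1p1 exists
mkfs.ext4 -qF "${dev}p1"
mkdir -p "$mnt" && mount "${dev}p1" "$mnt"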
00:03:35.915 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 17:46:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 17:46:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 17:46:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config [trace condensed: setup/devices.sh@62 checked 0000:00:04.7 through 0000:00:04.0 against 0000:0b:00.0, no match] 00:03:36.849 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 [trace condensed: 0000:80:04.7 through 0000:80:04.0 likewise checked, no match] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/nvme0n1: calling ioctl to re-read partition table: Success 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 17:46:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
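[editor's note: cleanup_nvme, just traced, is the teardown half of the test: unmount if mounted, then wipefs both the partition and the whole disk so the next subtest starts from clean signatures. As a standalone sketch; the mount point is an example.]

mnt=/tmp/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
# wipefs --all removes the ext4 superblock magic, then the primary/backup GPT
# headers and the protective MBR, matching the erase offsets logged above.
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1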
00:03:37.364 17:46:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:37.622 17:46:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 17:46:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 17:46:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 17:46:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config [trace condensed: setup/devices.sh@62 checked 0000:00:04.7 through 0000:00:04.0 against 0000:0b:00.0, no match] 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 [trace condensed: 0000:80:04.7 through 0000:80:04.0 likewise checked, no match] 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
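[editor's note: each verify step above runs 'setup.sh config' with PCI_ALLOWED pinned to the device under test and scans the per-device status lines for the expected 'Active devices: ...' marker. The shape of that loop, as a sketch; the expected strings are examples taken from this run, and the relative script path assumes the workspace's spdk checkout.]

want_pci=0000:0b:00.0
want_mount='nvme0n1:nvme0n1'
found=0
while read -r pci _ _ status; do
    [[ $pci == "$want_pci" ]] || continue
    # setup.sh refuses to bind a device with active mounts and reports them.
    [[ $status == *'Active devices: '*"$want_mount"* ]] && found=1
done < <(PCI_ALLOWED="$want_pci" ./scripts/setup.sh config)
(( found == 1 ))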
17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 17:46:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 17:46:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 17:46:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config [trace condensed: setup/devices.sh@62 checked 0000:00:04.7 through 0000:00:04.0 against 0000:0b:00.0, no match] 17:46:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:39.747 17:46:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 17:46:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 [trace condensed: 0000:80:04.7 through 0000:80:04.0 likewise checked, no match] 00:03:40.006 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 17:46:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:40.006 00:03:40.006 real 0m6.272s 00:03:40.006 user 0m1.453s 00:03:40.006 sys 0m2.385s 00:03:40.006 17:46:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.006 17:46:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:40.006 ************************************ 00:03:40.006 END TEST nvme_mount 00:03:40.006 ************************************
17:46:26 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:40.006 17:46:26 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.006 17:46:26 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.006 17:46:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.006 ************************************ 00:03:40.006 START TEST dm_mount 00:03:40.006 ************************************ 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.006 17:46:26 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:41.377 Creating new GPT entries in memory. 00:03:41.378 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.378 other utilities. 00:03:41.378 17:46:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.378 17:46:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.378 17:46:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.378 17:46:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.378 17:46:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:42.310 Creating new GPT entries in memory. 00:03:42.310 The operation has completed successfully. 
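[editor's note: once the second partition exists, dm_mount (continuing below) assembles both partitions into a single device-mapper target named nvme_dm_test and waits for /dev/mapper to catch up. The trace does not echo the dmsetup table itself; a linear concatenation like the following is the usual shape of such a target, so treat this as an assumed illustration rather than the exact table used.]

p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")           # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0 in this run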
00:03:42.310 17:46:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:42.310 17:46:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.310 17:46:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:42.310 17:46:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:42.310 17:46:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:43.245 The operation has completed successfully. 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2650661 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.245 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 17:46:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 17:46:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config [trace condensed: setup/devices.sh@62 checked 0000:00:04.7 through 0000:00:04.0 against 0000:0b:00.0, no match] 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 [trace condensed: 0000:80:04.7 through 0000:80:04.0 likewise checked, no match] 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:44.695
17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.695 17:46:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.630 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:03:45.631 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:45.889 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:45.889 17:46:31 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:45.889 00:03:45.889 real 0m5.771s 00:03:45.889 user 0m0.946s 00:03:45.889 sys 0m1.655s 00:03:45.889 17:46:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.889 17:46:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:45.889 ************************************ 00:03:45.889 END TEST dm_mount 00:03:45.889 ************************************ 00:03:45.889 17:46:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:45.889 17:46:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:45.889 17:46:32 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.889 17:46:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
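
The cleanup pass above (cleanup_dm followed by cleanup_nvme) unmounts the test mount point, tears down the device-mapper target, and wipes the filesystem and partition-table signatures. A minimal standalone sketch of the same teardown, using only commands visible in this trace; the mount point variable is hypothetical:

    # sketch: replicate the dm/nvme teardown done by cleanup_dm()/cleanup_nvme()
    mnt=/tmp/dm_mount                        # hypothetical mount point
    mountpoint -q "$mnt" && umount "$mnt"    # unmount only if actually mounted
    if [[ -L /dev/mapper/nvme_dm_test ]]; then
        dmsetup remove --force nvme_dm_test  # drop the dm target
    fi
    for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
        [[ -b $part ]] && wipefs --all "$part"   # erase fs signatures per partition
    done
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1   # then GPT/PMBR on the disk itself
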
00:03:45.889 17:46:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:45.889 17:46:32 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.889 17:46:32 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.147 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:46.147 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:46.147 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:46.147 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:46.147 17:46:32 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:46.148 17:46:32 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:46.148 17:46:32 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:46.148 17:46:32 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:46.148 17:46:32 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:46.148 17:46:32 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.148 17:46:32 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:46.148 00:03:46.148 real 0m13.960s 00:03:46.148 user 0m3.069s 00:03:46.148 sys 0m5.055s 00:03:46.148 17:46:32 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.148 17:46:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.148 ************************************ 00:03:46.148 END TEST devices 00:03:46.148 ************************************ 00:03:46.148 00:03:46.148 real 0m43.214s 00:03:46.148 user 0m12.231s 00:03:46.148 sys 0m19.125s 00:03:46.148 17:46:32 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.148 17:46:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.148 ************************************ 00:03:46.148 END TEST setup.sh 00:03:46.148 ************************************ 00:03:46.148 17:46:32 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:47.519 Hugepages 00:03:47.519 node hugesize free / total 00:03:47.519 node0 1048576kB 0 / 0 00:03:47.519 node0 2048kB 2048 / 2048 00:03:47.519 node1 1048576kB 0 / 0 00:03:47.519 node1 2048kB 0 / 0 00:03:47.519 00:03:47.519 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.519 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:47.519 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:47.519 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:47.519 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:47.519 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:47.519 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:47.519 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:47.519 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:47.519 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:47.519 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:47.519 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:47.519 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:47.519 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:47.519 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:47.519 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:47.519 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:47.519 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:47.519 17:46:33 -- spdk/autotest.sh@130 -- # uname -s 
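
The `setup.sh status` output above lists per-NUMA-node hugepage pools (2048 free of 2048 total 2048kB pages on node0, none on node1). Those numbers come straight from sysfs; a small sketch, assuming the standard kernel paths:

    # sketch: read per-node hugepage counts the way setup.sh status reports them
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}              # e.g. 2048kB or 1048576kB
            total=$(cat "$hp/nr_hugepages")
            free=$(cat "$hp/free_hugepages")
            echo "$(basename "$node") $size $free / $total"
        done
    done
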
00:03:47.519 17:46:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:47.519 17:46:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:47.519 17:46:33 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.451 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:48.451 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:48.451 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:48.451 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:48.709 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:48.709 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:48.709 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:48.709 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:48.709 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:49.646 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:49.646 17:46:35 -- common/autotest_common.sh@1530 -- # sleep 1 00:03:51.027 17:46:36 -- common/autotest_common.sh@1531 -- # bdfs=() 00:03:51.027 17:46:36 -- common/autotest_common.sh@1531 -- # local bdfs 00:03:51.027 17:46:36 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:03:51.027 17:46:36 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:03:51.027 17:46:36 -- common/autotest_common.sh@1511 -- # bdfs=() 00:03:51.027 17:46:36 -- common/autotest_common.sh@1511 -- # local bdfs 00:03:51.027 17:46:36 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.027 17:46:36 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.027 17:46:36 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:03:51.027 17:46:36 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:03:51.027 17:46:36 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:0b:00.0 00:03:51.027 17:46:36 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.011 Waiting for block devices as requested 00:03:52.011 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:52.011 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:52.011 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:52.270 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:52.270 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:52.270 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:52.270 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:52.270 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:52.529 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:03:52.529 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:52.788 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:52.788 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:52.788 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:52.788 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:53.047 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:53.047 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:53.047 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:53.306 17:46:39 -- common/autotest_common.sh@1536 -- # 
for bdf in "${bdfs[@]}" 00:03:53.306 17:46:39 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1500 -- # grep 0000:0b:00.0/nvme/nvme 00:03:53.306 17:46:39 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:53.306 17:46:39 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:03:53.306 17:46:39 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1543 -- # grep oacs 00:03:53.306 17:46:39 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:03:53.306 17:46:39 -- common/autotest_common.sh@1543 -- # oacs=' 0xf' 00:03:53.306 17:46:39 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:03:53.306 17:46:39 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:03:53.306 17:46:39 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:03:53.306 17:46:39 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:03:53.306 17:46:39 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:03:53.306 17:46:39 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:03:53.306 17:46:39 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:03:53.306 17:46:39 -- common/autotest_common.sh@1555 -- # continue 00:03:53.306 17:46:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:53.306 17:46:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.306 17:46:39 -- common/autotest_common.sh@10 -- # set +x 00:03:53.306 17:46:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:53.306 17:46:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.306 17:46:39 -- common/autotest_common.sh@10 -- # set +x 00:03:53.306 17:46:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.679 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.679 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.679 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.679 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.679 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.679 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.679 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.679 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.679 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.613 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.613 17:46:41 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:55.613 17:46:41 -- common/autotest_common.sh@728 -- # xtrace_disable 
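
nvme_namespace_revert above extracts the OACS field from `nvme id-ctrl` and masks it to decide whether the controller supports namespace management (oacs 0xf masked with 0x8 gives oacs_ns_manage=8 here; bit 3 of OACS is Namespace Management in the NVMe spec). A condensed sketch of that check with nvme-cli:

    # sketch: test the Namespace Management bit (bit 3, 0x8) of OACS, as the autotest does
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # e.g. ' 0xf'
    if (( oacs & 0x8 )); then
        echo "controller supports namespace management"
    fi
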
00:03:55.613 17:46:41 -- common/autotest_common.sh@10 -- # set +x 00:03:55.613 17:46:41 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:55.613 17:46:41 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:03:55.613 17:46:41 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.613 17:46:41 -- common/autotest_common.sh@1575 -- # bdfs=() 00:03:55.613 17:46:41 -- common/autotest_common.sh@1575 -- # local bdfs 00:03:55.613 17:46:41 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:03:55.613 17:46:41 -- common/autotest_common.sh@1511 -- # bdfs=() 00:03:55.613 17:46:41 -- common/autotest_common.sh@1511 -- # local bdfs 00:03:55.613 17:46:41 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.613 17:46:41 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.613 17:46:41 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:03:55.871 17:46:41 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:03:55.871 17:46:41 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:0b:00.0 00:03:55.871 17:46:41 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:03:55.871 17:46:41 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:55.871 17:46:41 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:03:55.871 17:46:41 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:55.871 17:46:41 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:03:55.871 17:46:41 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:0b:00.0 00:03:55.871 17:46:41 -- common/autotest_common.sh@1590 -- # [[ -z 0000:0b:00.0 ]] 00:03:55.871 17:46:41 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=2655849 00:03:55.871 17:46:41 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.871 17:46:41 -- common/autotest_common.sh@1596 -- # waitforlisten 2655849 00:03:55.871 17:46:41 -- common/autotest_common.sh@829 -- # '[' -z 2655849 ']' 00:03:55.871 17:46:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.871 17:46:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.871 17:46:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.871 17:46:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.871 17:46:41 -- common/autotest_common.sh@10 -- # set +x 00:03:55.871 [2024-07-24 17:46:41.946153] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
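
get_nvme_bdfs above derives the NVMe PCI addresses from `gen_nvme.sh | jq -r '.config[].params.traddr'`. When SPDK's helper isn't at hand, the same list can be read from sysfs; a sketch, valid only while the kernel nvme driver (not vfio-pci) owns the controllers:

    # sketch: enumerate NVMe controller BDFs without gen_nvme.sh
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:0b:00.0
        echo "$bdf"
    done
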
00:03:55.871 [2024-07-24 17:46:41.946235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655849 ] 00:03:55.871 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.871 [2024-07-24 17:46:42.004293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.871 [2024-07-24 17:46:42.110559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.129 17:46:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.129 17:46:42 -- common/autotest_common.sh@862 -- # return 0 00:03:56.129 17:46:42 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:03:56.129 17:46:42 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:03:56.129 17:46:42 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:59.410 nvme0n1 00:03:59.410 17:46:45 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:59.668 [2024-07-24 17:46:45.687421] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:59.668 [2024-07-24 17:46:45.687468] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:59.668 request: 00:03:59.668 { 00:03:59.668 "nvme_ctrlr_name": "nvme0", 00:03:59.668 "password": "test", 00:03:59.668 "method": "bdev_nvme_opal_revert", 00:03:59.668 "req_id": 1 00:03:59.668 } 00:03:59.668 Got JSON-RPC error response 00:03:59.668 response: 00:03:59.668 { 00:03:59.668 "code": -32603, 00:03:59.668 "message": "Internal error" 00:03:59.668 } 00:03:59.668 17:46:45 -- common/autotest_common.sh@1602 -- # true 00:03:59.668 17:46:45 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:03:59.668 17:46:45 -- common/autotest_common.sh@1606 -- # killprocess 2655849 00:03:59.668 17:46:45 -- common/autotest_common.sh@948 -- # '[' -z 2655849 ']' 00:03:59.668 17:46:45 -- common/autotest_common.sh@952 -- # kill -0 2655849 00:03:59.668 17:46:45 -- common/autotest_common.sh@953 -- # uname 00:03:59.668 17:46:45 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:59.668 17:46:45 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2655849 00:03:59.668 17:46:45 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:59.668 17:46:45 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:59.668 17:46:45 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2655849' 00:03:59.668 killing process with pid 2655849 00:03:59.668 17:46:45 -- common/autotest_common.sh@967 -- # kill 2655849 00:03:59.668 17:46:45 -- common/autotest_common.sh@972 -- # wait 2655849 00:04:01.565 17:46:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:01.565 17:46:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:01.565 17:46:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:01.565 17:46:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:01.565 17:46:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:01.565 17:46:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.565 17:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:01.565 17:46:47 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:01.566 17:46:47 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:01.566 17:46:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.566 17:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.566 17:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:01.566 ************************************ 00:04:01.566 START TEST env 00:04:01.566 ************************************ 00:04:01.566 17:46:47 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:01.566 * Looking for test storage... 00:04:01.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:01.566 17:46:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:01.566 17:46:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.566 17:46:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.566 17:46:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.566 ************************************ 00:04:01.566 START TEST env_memory 00:04:01.566 ************************************ 00:04:01.566 17:46:47 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:01.566 00:04:01.566 00:04:01.566 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.566 http://cunit.sourceforge.net/ 00:04:01.566 00:04:01.566 00:04:01.566 Suite: memory 00:04:01.566 Test: alloc and free memory map ...[2024-07-24 17:46:47.607520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:01.566 passed 00:04:01.566 Test: mem map translation ...[2024-07-24 17:46:47.628181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:01.566 [2024-07-24 17:46:47.628205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:01.566 [2024-07-24 17:46:47.628261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:01.566 [2024-07-24 17:46:47.628274] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:01.566 passed 00:04:01.566 Test: mem map registration ...[2024-07-24 17:46:47.671031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:01.566 [2024-07-24 17:46:47.671050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:01.566 passed 00:04:01.566 Test: mem map adjacent registrations ...passed 00:04:01.566 00:04:01.566 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.566 suites 1 1 n/a 0 0 00:04:01.566 tests 4 4 4 0 0 00:04:01.566 asserts 152 152 152 0 n/a 00:04:01.566 00:04:01.566 Elapsed time = 0.146 seconds 00:04:01.566 00:04:01.566 real 0m0.153s 00:04:01.566 user 0m0.146s 00:04:01.566 sys 0m0.006s 00:04:01.566 17:46:47 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.566 17:46:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:01.566 ************************************ 00:04:01.566 END TEST env_memory 00:04:01.566 ************************************ 00:04:01.566 17:46:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:01.566 17:46:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.566 17:46:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.566 17:46:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.566 ************************************ 00:04:01.566 START TEST env_vtophys 00:04:01.566 ************************************ 00:04:01.566 17:46:47 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:01.566 EAL: lib.eal log level changed from notice to debug 00:04:01.566 EAL: Detected lcore 0 as core 0 on socket 0 00:04:01.566 EAL: Detected lcore 1 as core 1 on socket 0 00:04:01.566 EAL: Detected lcore 2 as core 2 on socket 0 00:04:01.566 EAL: Detected lcore 3 as core 3 on socket 0 00:04:01.566 EAL: Detected lcore 4 as core 4 on socket 0 00:04:01.566 EAL: Detected lcore 5 as core 5 on socket 0 00:04:01.566 EAL: Detected lcore 6 as core 8 on socket 0 00:04:01.566 EAL: Detected lcore 7 as core 9 on socket 0 00:04:01.566 EAL: Detected lcore 8 as core 10 on socket 0 00:04:01.566 EAL: Detected lcore 9 as core 11 on socket 0 00:04:01.566 EAL: Detected lcore 10 as core 12 on socket 0 00:04:01.566 EAL: Detected lcore 11 as core 13 on socket 0 00:04:01.566 EAL: Detected lcore 12 as core 0 on socket 1 00:04:01.566 EAL: Detected lcore 13 as core 1 on socket 1 00:04:01.566 EAL: Detected lcore 14 as core 2 on socket 1 00:04:01.566 EAL: Detected lcore 15 as core 3 on socket 1 00:04:01.566 EAL: Detected lcore 16 as core 4 on socket 1 00:04:01.566 EAL: Detected lcore 17 as core 5 on socket 1 00:04:01.566 EAL: Detected lcore 18 as core 8 on socket 1 00:04:01.566 EAL: Detected lcore 19 as core 9 on socket 1 00:04:01.566 EAL: Detected lcore 20 as core 10 on socket 1 00:04:01.566 EAL: Detected lcore 21 as core 11 on socket 1 00:04:01.566 EAL: Detected lcore 22 as core 12 on socket 1 00:04:01.566 EAL: Detected lcore 23 as core 13 on socket 1 00:04:01.566 EAL: Detected lcore 24 as core 0 on socket 0 00:04:01.566 EAL: Detected lcore 25 as core 1 on socket 0 00:04:01.566 EAL: Detected lcore 26 as core 2 on socket 0 00:04:01.566 EAL: Detected lcore 27 as core 3 on socket 0 00:04:01.566 EAL: Detected lcore 28 as core 4 on socket 0 00:04:01.566 EAL: Detected lcore 29 as core 5 on socket 0 00:04:01.566 EAL: Detected lcore 30 as core 8 on socket 0 00:04:01.566 EAL: Detected lcore 31 as core 9 on socket 0 00:04:01.566 EAL: Detected lcore 32 as core 10 on socket 0 00:04:01.566 EAL: Detected lcore 33 as core 11 on socket 0 00:04:01.566 EAL: Detected lcore 34 as core 12 on socket 0 00:04:01.566 EAL: Detected lcore 35 as core 13 on socket 0 00:04:01.566 EAL: Detected lcore 36 as core 0 on socket 1 00:04:01.566 EAL: Detected lcore 37 as core 1 on socket 1 00:04:01.566 EAL: Detected lcore 38 as core 2 on socket 1 00:04:01.566 EAL: Detected lcore 39 as core 3 on socket 1 00:04:01.566 EAL: Detected lcore 40 as core 4 on socket 1 00:04:01.566 EAL: Detected lcore 41 as core 5 on socket 1 00:04:01.566 EAL: Detected lcore 42 as core 8 on socket 1 00:04:01.566 EAL: Detected lcore 43 as core 9 
on socket 1 00:04:01.566 EAL: Detected lcore 44 as core 10 on socket 1 00:04:01.566 EAL: Detected lcore 45 as core 11 on socket 1 00:04:01.566 EAL: Detected lcore 46 as core 12 on socket 1 00:04:01.566 EAL: Detected lcore 47 as core 13 on socket 1 00:04:01.566 EAL: Maximum logical cores by configuration: 128 00:04:01.566 EAL: Detected CPU lcores: 48 00:04:01.566 EAL: Detected NUMA nodes: 2 00:04:01.566 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:01.566 EAL: Detected shared linkage of DPDK 00:04:01.566 EAL: No shared files mode enabled, IPC will be disabled 00:04:01.566 EAL: Bus pci wants IOVA as 'DC' 00:04:01.566 EAL: Buses did not request a specific IOVA mode. 00:04:01.566 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:01.566 EAL: Selected IOVA mode 'VA' 00:04:01.566 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.566 EAL: Probing VFIO support... 00:04:01.566 EAL: IOMMU type 1 (Type 1) is supported 00:04:01.566 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:01.566 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:01.566 EAL: VFIO support initialized 00:04:01.566 EAL: Ask a virtual area of 0x2e000 bytes 00:04:01.566 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:01.566 EAL: Setting up physically contiguous memory... 00:04:01.566 EAL: Setting maximum number of open files to 524288 00:04:01.566 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:01.566 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:01.566 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:01.566 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.566 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:01.566 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.566 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.566 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:01.566 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:01.566 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.566 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:01.566 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.566 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.566 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:01.566 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:01.566 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.566 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:01.566 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.566 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.567 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:01.567 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:01.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.567 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:01.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.567 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:01.567 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:01.567 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:01.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.567 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:01.567 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.567 EAL: Ask a virtual 
area of 0x400000000 bytes 00:04:01.567 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:01.567 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:01.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.567 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:01.567 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.567 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:01.567 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:01.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.567 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:01.567 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.567 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:01.567 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:01.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.567 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:01.567 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.567 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:01.567 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:01.567 EAL: Hugepages will be freed exactly as allocated. 00:04:01.567 EAL: No shared files mode enabled, IPC is disabled 00:04:01.567 EAL: No shared files mode enabled, IPC is disabled 00:04:01.567 EAL: TSC frequency is ~2700000 KHz 00:04:01.567 EAL: Main lcore 0 is ready (tid=7f4535364a00;cpuset=[0]) 00:04:01.567 EAL: Trying to obtain current memory policy. 00:04:01.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.567 EAL: Restoring previous memory policy: 0 00:04:01.567 EAL: request: mp_malloc_sync 00:04:01.567 EAL: No shared files mode enabled, IPC is disabled 00:04:01.567 EAL: Heap on socket 0 was expanded by 2MB 00:04:01.567 EAL: No shared files mode enabled, IPC is disabled 00:04:01.567 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:01.567 EAL: Mem event callback 'spdk:(nil)' registered 00:04:01.824 00:04:01.824 00:04:01.824 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.824 http://cunit.sourceforge.net/ 00:04:01.824 00:04:01.824 00:04:01.824 Suite: components_suite 00:04:01.824 Test: vtophys_malloc_test ...passed 00:04:01.824 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 4MB 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was shrunk by 4MB 00:04:01.824 EAL: Trying to obtain current memory policy. 
00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 6MB 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was shrunk by 6MB 00:04:01.824 EAL: Trying to obtain current memory policy. 00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 10MB 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was shrunk by 10MB 00:04:01.824 EAL: Trying to obtain current memory policy. 00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 18MB 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was shrunk by 18MB 00:04:01.824 EAL: Trying to obtain current memory policy. 00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 34MB 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was shrunk by 34MB 00:04:01.824 EAL: Trying to obtain current memory policy. 00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 66MB 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was shrunk by 66MB 00:04:01.824 EAL: Trying to obtain current memory policy. 
00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 130MB 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was shrunk by 130MB 00:04:01.824 EAL: Trying to obtain current memory policy. 00:04:01.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.824 EAL: Restoring previous memory policy: 4 00:04:01.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.824 EAL: request: mp_malloc_sync 00:04:01.824 EAL: No shared files mode enabled, IPC is disabled 00:04:01.824 EAL: Heap on socket 0 was expanded by 258MB 00:04:02.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.081 EAL: request: mp_malloc_sync 00:04:02.081 EAL: No shared files mode enabled, IPC is disabled 00:04:02.081 EAL: Heap on socket 0 was shrunk by 258MB 00:04:02.081 EAL: Trying to obtain current memory policy. 00:04:02.081 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.081 EAL: Restoring previous memory policy: 4 00:04:02.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.081 EAL: request: mp_malloc_sync 00:04:02.081 EAL: No shared files mode enabled, IPC is disabled 00:04:02.081 EAL: Heap on socket 0 was expanded by 514MB 00:04:02.339 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.339 EAL: request: mp_malloc_sync 00:04:02.339 EAL: No shared files mode enabled, IPC is disabled 00:04:02.339 EAL: Heap on socket 0 was shrunk by 514MB 00:04:02.339 EAL: Trying to obtain current memory policy. 
00:04:02.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.597 EAL: Restoring previous memory policy: 4 00:04:02.597 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.597 EAL: request: mp_malloc_sync 00:04:02.597 EAL: No shared files mode enabled, IPC is disabled 00:04:02.597 EAL: Heap on socket 0 was expanded by 1026MB 00:04:02.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.111 EAL: request: mp_malloc_sync 00:04:03.111 EAL: No shared files mode enabled, IPC is disabled 00:04:03.111 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:03.111 passed 00:04:03.111 00:04:03.111 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.111 suites 1 1 n/a 0 0 00:04:03.111 tests 2 2 2 0 0 00:04:03.111 asserts 497 497 497 0 n/a 00:04:03.111 00:04:03.111 Elapsed time = 1.419 seconds 00:04:03.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.111 EAL: request: mp_malloc_sync 00:04:03.111 EAL: No shared files mode enabled, IPC is disabled 00:04:03.111 EAL: Heap on socket 0 was shrunk by 2MB 00:04:03.111 EAL: No shared files mode enabled, IPC is disabled 00:04:03.111 EAL: No shared files mode enabled, IPC is disabled 00:04:03.111 EAL: No shared files mode enabled, IPC is disabled 00:04:03.111 00:04:03.111 real 0m1.533s 00:04:03.111 user 0m0.883s 00:04:03.111 sys 0m0.620s 00:04:03.111 17:46:49 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.111 17:46:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:03.111 ************************************ 00:04:03.111 END TEST env_vtophys 00:04:03.111 ************************************ 00:04:03.111 17:46:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:03.111 17:46:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.111 17:46:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.112 17:46:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.112 ************************************ 00:04:03.112 START TEST env_pci 00:04:03.112 ************************************ 00:04:03.112 17:46:49 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:03.112 00:04:03.112 00:04:03.112 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.112 http://cunit.sourceforge.net/ 00:04:03.112 00:04:03.112 00:04:03.112 Suite: pci 00:04:03.112 Test: pci_hook ...[2024-07-24 17:46:49.361005] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2656740 has claimed it 00:04:03.369 EAL: Cannot find device (10000:00:01.0) 00:04:03.369 EAL: Failed to attach device on primary process 00:04:03.369 passed 00:04:03.369 00:04:03.369 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.369 suites 1 1 n/a 0 0 00:04:03.369 tests 1 1 1 0 0 00:04:03.369 asserts 25 25 25 0 n/a 00:04:03.369 00:04:03.369 Elapsed time = 0.022 seconds 00:04:03.369 00:04:03.369 real 0m0.036s 00:04:03.369 user 0m0.010s 00:04:03.369 sys 0m0.025s 00:04:03.369 17:46:49 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.369 17:46:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:03.369 ************************************ 00:04:03.369 END TEST env_pci 00:04:03.369 ************************************ 00:04:03.369 17:46:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:03.369 
17:46:49 env -- env/env.sh@15 -- # uname 00:04:03.369 17:46:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:03.369 17:46:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:03.369 17:46:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:03.369 17:46:49 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:03.369 17:46:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.369 17:46:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.369 ************************************ 00:04:03.369 START TEST env_dpdk_post_init 00:04:03.369 ************************************ 00:04:03.369 17:46:49 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:03.369 EAL: Detected CPU lcores: 48 00:04:03.369 EAL: Detected NUMA nodes: 2 00:04:03.369 EAL: Detected shared linkage of DPDK 00:04:03.369 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:03.369 EAL: Selected IOVA mode 'VA' 00:04:03.369 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.369 EAL: VFIO support initialized 00:04:03.369 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:03.369 EAL: Using IOMMU type 1 (Type 1) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:03.369 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:04.304 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:04.304 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:07.584 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:07.584 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:07.584 Starting DPDK initialization... 00:04:07.584 Starting SPDK post initialization... 00:04:07.584 SPDK NVMe probe 00:04:07.584 Attaching to 0000:0b:00.0 00:04:07.584 Attached to 0000:0b:00.0 00:04:07.584 Cleaning up... 
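
The env_dpdk_post_init run above probes each BDF with the spdk_ioat/spdk_nvme user-space drivers after setup.sh has rebound them. Which kernel driver currently owns a given PCI function can be confirmed from sysfs; a sketch for the NVMe controller in this run:

    # sketch: report the kernel driver bound to a PCI function (vfio-pci vs nvme/ioatdma)
    bdf=0000:0b:00.0
    if drv=$(readlink "/sys/bus/pci/devices/$bdf/driver"); then
        echo "$bdf -> $(basename "$drv")"
    else
        echo "$bdf -> no driver bound"
    fi
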
00:04:07.584 00:04:07.584 real 0m4.361s 00:04:07.584 user 0m3.212s 00:04:07.584 sys 0m0.206s 00:04:07.584 17:46:53 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.584 17:46:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.584 ************************************ 00:04:07.584 END TEST env_dpdk_post_init 00:04:07.584 ************************************ 00:04:07.584 17:46:53 env -- env/env.sh@26 -- # uname 00:04:07.584 17:46:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:07.584 17:46:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.584 17:46:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.584 17:46:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.584 17:46:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.584 ************************************ 00:04:07.584 START TEST env_mem_callbacks 00:04:07.584 ************************************ 00:04:07.584 17:46:53 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.842 EAL: Detected CPU lcores: 48 00:04:07.842 EAL: Detected NUMA nodes: 2 00:04:07.842 EAL: Detected shared linkage of DPDK 00:04:07.842 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.842 EAL: Selected IOVA mode 'VA' 00:04:07.842 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.842 EAL: VFIO support initialized 00:04:07.842 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.842 00:04:07.842 00:04:07.842 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.842 http://cunit.sourceforge.net/ 00:04:07.842 00:04:07.842 00:04:07.842 Suite: memory 00:04:07.842 Test: test ... 
00:04:07.842 register 0x200000200000 2097152 00:04:07.842 malloc 3145728 00:04:07.842 register 0x200000400000 4194304 00:04:07.842 buf 0x200000500000 len 3145728 PASSED 00:04:07.842 malloc 64 00:04:07.842 buf 0x2000004fff40 len 64 PASSED 00:04:07.842 malloc 4194304 00:04:07.842 register 0x200000800000 6291456 00:04:07.842 buf 0x200000a00000 len 4194304 PASSED 00:04:07.842 free 0x200000500000 3145728 00:04:07.842 free 0x2000004fff40 64 00:04:07.842 unregister 0x200000400000 4194304 PASSED 00:04:07.842 free 0x200000a00000 4194304 00:04:07.842 unregister 0x200000800000 6291456 PASSED 00:04:07.842 malloc 8388608 00:04:07.843 register 0x200000400000 10485760 00:04:07.843 buf 0x200000600000 len 8388608 PASSED 00:04:07.843 free 0x200000600000 8388608 00:04:07.843 unregister 0x200000400000 10485760 PASSED 00:04:07.843 passed 00:04:07.843 00:04:07.843 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.843 suites 1 1 n/a 0 0 00:04:07.843 tests 1 1 1 0 0 00:04:07.843 asserts 15 15 15 0 n/a 00:04:07.843 00:04:07.843 Elapsed time = 0.005 seconds 00:04:07.843 00:04:07.843 real 0m0.049s 00:04:07.843 user 0m0.011s 00:04:07.843 sys 0m0.037s 00:04:07.843 17:46:53 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.843 17:46:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:07.843 ************************************ 00:04:07.843 END TEST env_mem_callbacks 00:04:07.843 ************************************ 00:04:07.843 00:04:07.843 real 0m6.418s 00:04:07.843 user 0m4.385s 00:04:07.843 sys 0m1.078s 00:04:07.843 17:46:53 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.843 17:46:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.843 ************************************ 00:04:07.843 END TEST env 00:04:07.843 ************************************ 00:04:07.843 17:46:53 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:07.843 17:46:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.843 17:46:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.843 17:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:07.843 ************************************ 00:04:07.843 START TEST rpc 00:04:07.843 ************************************ 00:04:07.843 17:46:53 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:07.843 * Looking for test storage... 00:04:07.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.843 17:46:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2657402 00:04:07.843 17:46:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:07.843 17:46:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.843 17:46:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2657402 00:04:07.843 17:46:54 rpc -- common/autotest_common.sh@829 -- # '[' -z 2657402 ']' 00:04:07.843 17:46:54 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.843 17:46:54 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:07.843 17:46:54 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
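
The rpc suite above starts spdk_tgt with the bdev tracepoint group enabled (-e bdev) and blocks in waitforlisten until /var/tmp/spdk.sock answers. A hedged sketch of that startup handshake, polling with rpc.py; rpc_get_methods is just a cheap probe, any RPC that succeeds once the socket is up would do:

    # sketch: start spdk_tgt and wait for its RPC socket, as waitforlisten does
    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break                     # target is listening
        fi
        sleep 0.1
    done
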
00:04:07.843 17:46:54 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:07.843 17:46:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.843 [2024-07-24 17:46:54.060540] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:04:07.843 [2024-07-24 17:46:54.060633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657402 ] 00:04:07.843 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.101 [2024-07-24 17:46:54.116286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.101 [2024-07-24 17:46:54.228573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.101 [2024-07-24 17:46:54.228619] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2657402' to capture a snapshot of events at runtime. 00:04:08.101 [2024-07-24 17:46:54.228648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.101 [2024-07-24 17:46:54.228658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.101 [2024-07-24 17:46:54.228668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2657402 for offline analysis/debug. 00:04:08.101 [2024-07-24 17:46:54.228693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.358 17:46:54 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:08.358 17:46:54 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:08.358 17:46:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.358 17:46:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.358 17:46:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.358 17:46:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.358 17:46:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.358 17:46:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.358 17:46:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.358 ************************************ 00:04:08.358 START TEST rpc_integrity 00:04:08.358 ************************************ 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:08.358 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.358 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.358 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.358 17:46:54 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.358 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.358 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.358 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.358 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.358 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.358 { 00:04:08.358 "name": "Malloc0", 00:04:08.358 "aliases": [ 00:04:08.358 "f01ad22c-69b5-4f1f-89a0-7ee5ec859fbe" 00:04:08.358 ], 00:04:08.358 "product_name": "Malloc disk", 00:04:08.358 "block_size": 512, 00:04:08.358 "num_blocks": 16384, 00:04:08.358 "uuid": "f01ad22c-69b5-4f1f-89a0-7ee5ec859fbe", 00:04:08.358 "assigned_rate_limits": { 00:04:08.358 "rw_ios_per_sec": 0, 00:04:08.358 "rw_mbytes_per_sec": 0, 00:04:08.358 "r_mbytes_per_sec": 0, 00:04:08.358 "w_mbytes_per_sec": 0 00:04:08.358 }, 00:04:08.358 "claimed": false, 00:04:08.358 "zoned": false, 00:04:08.358 "supported_io_types": { 00:04:08.358 "read": true, 00:04:08.358 "write": true, 00:04:08.358 "unmap": true, 00:04:08.358 "flush": true, 00:04:08.358 "reset": true, 00:04:08.358 "nvme_admin": false, 00:04:08.358 "nvme_io": false, 00:04:08.358 "nvme_io_md": false, 00:04:08.358 "write_zeroes": true, 00:04:08.358 "zcopy": true, 00:04:08.358 "get_zone_info": false, 00:04:08.358 "zone_management": false, 00:04:08.358 "zone_append": false, 00:04:08.358 "compare": false, 00:04:08.358 "compare_and_write": false, 00:04:08.358 "abort": true, 00:04:08.358 "seek_hole": false, 00:04:08.358 "seek_data": false, 00:04:08.358 "copy": true, 00:04:08.358 "nvme_iov_md": false 00:04:08.358 }, 00:04:08.358 "memory_domains": [ 00:04:08.358 { 00:04:08.358 "dma_device_id": "system", 00:04:08.358 "dma_device_type": 1 00:04:08.358 }, 00:04:08.358 { 00:04:08.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.359 "dma_device_type": 2 00:04:08.359 } 00:04:08.359 ], 00:04:08.359 "driver_specific": {} 00:04:08.359 } 00:04:08.359 ]' 00:04:08.359 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 [2024-07-24 17:46:54.636210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.617 [2024-07-24 17:46:54.636250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.617 [2024-07-24 17:46:54.636272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fadd50 00:04:08.617 [2024-07-24 17:46:54.636286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.617 [2024-07-24 17:46:54.637812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
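The sequence above is the core of the rpc_integrity test: rpc.sh creates a malloc bdev, layers a passthru bdev on top of it, and verifies both through bdev_get_bdevs. A minimal sketch of the same flow driven directly with scripts/rpc.py against a running spdk_tgt (assuming the default /var/tmp/spdk.sock socket; the bdev names shown are the auto-assigned ones from this run):

  scripts/rpc.py bdev_malloc_create 8 512           # 8 MiB at 512 B/block -> the 16384-block Malloc0 above
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length         # 2: Malloc0 plus Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length         # back to 0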
00:04:08.617 [2024-07-24 17:46:54.637839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.617 Passthru0 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.617 { 00:04:08.617 "name": "Malloc0", 00:04:08.617 "aliases": [ 00:04:08.617 "f01ad22c-69b5-4f1f-89a0-7ee5ec859fbe" 00:04:08.617 ], 00:04:08.617 "product_name": "Malloc disk", 00:04:08.617 "block_size": 512, 00:04:08.617 "num_blocks": 16384, 00:04:08.617 "uuid": "f01ad22c-69b5-4f1f-89a0-7ee5ec859fbe", 00:04:08.617 "assigned_rate_limits": { 00:04:08.617 "rw_ios_per_sec": 0, 00:04:08.617 "rw_mbytes_per_sec": 0, 00:04:08.617 "r_mbytes_per_sec": 0, 00:04:08.617 "w_mbytes_per_sec": 0 00:04:08.617 }, 00:04:08.617 "claimed": true, 00:04:08.617 "claim_type": "exclusive_write", 00:04:08.617 "zoned": false, 00:04:08.617 "supported_io_types": { 00:04:08.617 "read": true, 00:04:08.617 "write": true, 00:04:08.617 "unmap": true, 00:04:08.617 "flush": true, 00:04:08.617 "reset": true, 00:04:08.617 "nvme_admin": false, 00:04:08.617 "nvme_io": false, 00:04:08.617 "nvme_io_md": false, 00:04:08.617 "write_zeroes": true, 00:04:08.617 "zcopy": true, 00:04:08.617 "get_zone_info": false, 00:04:08.617 "zone_management": false, 00:04:08.617 "zone_append": false, 00:04:08.617 "compare": false, 00:04:08.617 "compare_and_write": false, 00:04:08.617 "abort": true, 00:04:08.617 "seek_hole": false, 00:04:08.617 "seek_data": false, 00:04:08.617 "copy": true, 00:04:08.617 "nvme_iov_md": false 00:04:08.617 }, 00:04:08.617 "memory_domains": [ 00:04:08.617 { 00:04:08.617 "dma_device_id": "system", 00:04:08.617 "dma_device_type": 1 00:04:08.617 }, 00:04:08.617 { 00:04:08.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.617 "dma_device_type": 2 00:04:08.617 } 00:04:08.617 ], 00:04:08.617 "driver_specific": {} 00:04:08.617 }, 00:04:08.617 { 00:04:08.617 "name": "Passthru0", 00:04:08.617 "aliases": [ 00:04:08.617 "316fae34-80d7-59bd-9548-1a2b0c40802e" 00:04:08.617 ], 00:04:08.617 "product_name": "passthru", 00:04:08.617 "block_size": 512, 00:04:08.617 "num_blocks": 16384, 00:04:08.617 "uuid": "316fae34-80d7-59bd-9548-1a2b0c40802e", 00:04:08.617 "assigned_rate_limits": { 00:04:08.617 "rw_ios_per_sec": 0, 00:04:08.617 "rw_mbytes_per_sec": 0, 00:04:08.617 "r_mbytes_per_sec": 0, 00:04:08.617 "w_mbytes_per_sec": 0 00:04:08.617 }, 00:04:08.617 "claimed": false, 00:04:08.617 "zoned": false, 00:04:08.617 "supported_io_types": { 00:04:08.617 "read": true, 00:04:08.617 "write": true, 00:04:08.617 "unmap": true, 00:04:08.617 "flush": true, 00:04:08.617 "reset": true, 00:04:08.617 "nvme_admin": false, 00:04:08.617 "nvme_io": false, 00:04:08.617 "nvme_io_md": false, 00:04:08.617 "write_zeroes": true, 00:04:08.617 "zcopy": true, 00:04:08.617 "get_zone_info": false, 00:04:08.617 "zone_management": false, 00:04:08.617 "zone_append": false, 00:04:08.617 "compare": false, 00:04:08.617 "compare_and_write": false, 00:04:08.617 "abort": true, 00:04:08.617 "seek_hole": false, 00:04:08.617 "seek_data": false, 00:04:08.617 "copy": true, 00:04:08.617 "nvme_iov_md": false 00:04:08.617 
}, 00:04:08.617 "memory_domains": [ 00:04:08.617 { 00:04:08.617 "dma_device_id": "system", 00:04:08.617 "dma_device_type": 1 00:04:08.617 }, 00:04:08.617 { 00:04:08.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.617 "dma_device_type": 2 00:04:08.617 } 00:04:08.617 ], 00:04:08.617 "driver_specific": { 00:04:08.617 "passthru": { 00:04:08.617 "name": "Passthru0", 00:04:08.617 "base_bdev_name": "Malloc0" 00:04:08.617 } 00:04:08.617 } 00:04:08.617 } 00:04:08.617 ]' 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.617 17:46:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.617 00:04:08.617 real 0m0.229s 00:04:08.617 user 0m0.151s 00:04:08.617 sys 0m0.021s 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 ************************************ 00:04:08.617 END TEST rpc_integrity 00:04:08.617 ************************************ 00:04:08.617 17:46:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.617 17:46:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.617 17:46:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.617 17:46:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 ************************************ 00:04:08.617 START TEST rpc_plugins 00:04:08.617 ************************************ 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.617 { 00:04:08.617 "name": "Malloc1", 00:04:08.617 "aliases": [ 00:04:08.617 "93c35ff7-5ab1-42de-9d0e-cdb94e3849de" 00:04:08.617 ], 00:04:08.617 "product_name": "Malloc disk", 00:04:08.617 "block_size": 4096, 00:04:08.617 "num_blocks": 256, 00:04:08.617 "uuid": "93c35ff7-5ab1-42de-9d0e-cdb94e3849de", 00:04:08.617 "assigned_rate_limits": { 00:04:08.617 "rw_ios_per_sec": 0, 00:04:08.617 "rw_mbytes_per_sec": 0, 00:04:08.617 "r_mbytes_per_sec": 0, 00:04:08.617 "w_mbytes_per_sec": 0 00:04:08.617 }, 00:04:08.617 "claimed": false, 00:04:08.617 "zoned": false, 00:04:08.617 "supported_io_types": { 00:04:08.617 "read": true, 00:04:08.617 "write": true, 00:04:08.617 "unmap": true, 00:04:08.617 "flush": true, 00:04:08.617 "reset": true, 00:04:08.617 "nvme_admin": false, 00:04:08.617 "nvme_io": false, 00:04:08.617 "nvme_io_md": false, 00:04:08.617 "write_zeroes": true, 00:04:08.617 "zcopy": true, 00:04:08.617 "get_zone_info": false, 00:04:08.617 "zone_management": false, 00:04:08.617 "zone_append": false, 00:04:08.617 "compare": false, 00:04:08.617 "compare_and_write": false, 00:04:08.617 "abort": true, 00:04:08.617 "seek_hole": false, 00:04:08.617 "seek_data": false, 00:04:08.617 "copy": true, 00:04:08.617 "nvme_iov_md": false 00:04:08.617 }, 00:04:08.617 "memory_domains": [ 00:04:08.617 { 00:04:08.617 "dma_device_id": "system", 00:04:08.617 "dma_device_type": 1 00:04:08.617 }, 00:04:08.617 { 00:04:08.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.617 "dma_device_type": 2 00:04:08.617 } 00:04:08.617 ], 00:04:08.617 "driver_specific": {} 00:04:08.617 } 00:04:08.617 ]' 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.617 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.617 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:08.875 17:46:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.875 00:04:08.875 real 0m0.120s 00:04:08.875 user 0m0.079s 00:04:08.875 sys 0m0.010s 00:04:08.875 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.875 17:46:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.875 ************************************ 00:04:08.875 END TEST rpc_plugins 00:04:08.875 ************************************ 00:04:08.875 17:46:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.875 17:46:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.875 17:46:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.875 17:46:54 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.875 ************************************ 00:04:08.875 START TEST rpc_trace_cmd_test 00:04:08.875 ************************************ 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:08.875 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2657402", 00:04:08.875 "tpoint_group_mask": "0x8", 00:04:08.875 "iscsi_conn": { 00:04:08.875 "mask": "0x2", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "scsi": { 00:04:08.875 "mask": "0x4", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "bdev": { 00:04:08.875 "mask": "0x8", 00:04:08.875 "tpoint_mask": "0xffffffffffffffff" 00:04:08.875 }, 00:04:08.875 "nvmf_rdma": { 00:04:08.875 "mask": "0x10", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "nvmf_tcp": { 00:04:08.875 "mask": "0x20", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "ftl": { 00:04:08.875 "mask": "0x40", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "blobfs": { 00:04:08.875 "mask": "0x80", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "dsa": { 00:04:08.875 "mask": "0x200", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "thread": { 00:04:08.875 "mask": "0x400", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "nvme_pcie": { 00:04:08.875 "mask": "0x800", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "iaa": { 00:04:08.875 "mask": "0x1000", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "nvme_tcp": { 00:04:08.875 "mask": "0x2000", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "bdev_nvme": { 00:04:08.875 "mask": "0x4000", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 }, 00:04:08.875 "sock": { 00:04:08.875 "mask": "0x8000", 00:04:08.875 "tpoint_mask": "0x0" 00:04:08.875 } 00:04:08.875 }' 00:04:08.875 17:46:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:08.875 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:08.875 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.875 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.875 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.875 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.875 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.875 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.876 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.134 17:46:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.134 00:04:09.134 real 0m0.200s 00:04:09.134 user 0m0.178s 00:04:09.134 sys 0m0.014s 00:04:09.134 17:46:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.134 17:46:55 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.134 ************************************ 00:04:09.134 END TEST rpc_trace_cmd_test 00:04:09.134 ************************************ 00:04:09.134 17:46:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.134 17:46:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.134 17:46:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.134 17:46:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.134 17:46:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.134 17:46:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.134 ************************************ 00:04:09.134 START TEST rpc_daemon_integrity 00:04:09.134 ************************************ 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.134 { 00:04:09.134 "name": "Malloc2", 00:04:09.134 "aliases": [ 00:04:09.134 "64903634-bb2c-4b21-b6ed-16f89fe1cd5c" 00:04:09.134 ], 00:04:09.134 "product_name": "Malloc disk", 00:04:09.134 "block_size": 512, 00:04:09.134 "num_blocks": 16384, 00:04:09.134 "uuid": "64903634-bb2c-4b21-b6ed-16f89fe1cd5c", 00:04:09.134 "assigned_rate_limits": { 00:04:09.134 "rw_ios_per_sec": 0, 00:04:09.134 "rw_mbytes_per_sec": 0, 00:04:09.134 "r_mbytes_per_sec": 0, 00:04:09.134 "w_mbytes_per_sec": 0 00:04:09.134 }, 00:04:09.134 "claimed": false, 00:04:09.134 "zoned": false, 00:04:09.134 "supported_io_types": { 00:04:09.134 "read": true, 00:04:09.134 "write": true, 00:04:09.134 "unmap": true, 00:04:09.134 "flush": true, 00:04:09.134 "reset": true, 00:04:09.134 "nvme_admin": false, 00:04:09.134 "nvme_io": false, 00:04:09.134 "nvme_io_md": false, 00:04:09.134 "write_zeroes": true, 00:04:09.134 "zcopy": true, 00:04:09.134 "get_zone_info": false, 00:04:09.134 "zone_management": false, 00:04:09.134 "zone_append": false, 00:04:09.134 "compare": false, 00:04:09.134 "compare_and_write": false, 
00:04:09.134 "abort": true, 00:04:09.134 "seek_hole": false, 00:04:09.134 "seek_data": false, 00:04:09.134 "copy": true, 00:04:09.134 "nvme_iov_md": false 00:04:09.134 }, 00:04:09.134 "memory_domains": [ 00:04:09.134 { 00:04:09.134 "dma_device_id": "system", 00:04:09.134 "dma_device_type": 1 00:04:09.134 }, 00:04:09.134 { 00:04:09.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.134 "dma_device_type": 2 00:04:09.134 } 00:04:09.134 ], 00:04:09.134 "driver_specific": {} 00:04:09.134 } 00:04:09.134 ]' 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.134 [2024-07-24 17:46:55.322201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.134 [2024-07-24 17:46:55.322247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.134 [2024-07-24 17:46:55.322270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1faec00 00:04:09.134 [2024-07-24 17:46:55.322283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.134 [2024-07-24 17:46:55.323633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.134 [2024-07-24 17:46:55.323661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.134 Passthru0 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.134 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.134 { 00:04:09.134 "name": "Malloc2", 00:04:09.134 "aliases": [ 00:04:09.134 "64903634-bb2c-4b21-b6ed-16f89fe1cd5c" 00:04:09.134 ], 00:04:09.134 "product_name": "Malloc disk", 00:04:09.134 "block_size": 512, 00:04:09.134 "num_blocks": 16384, 00:04:09.134 "uuid": "64903634-bb2c-4b21-b6ed-16f89fe1cd5c", 00:04:09.134 "assigned_rate_limits": { 00:04:09.134 "rw_ios_per_sec": 0, 00:04:09.134 "rw_mbytes_per_sec": 0, 00:04:09.134 "r_mbytes_per_sec": 0, 00:04:09.134 "w_mbytes_per_sec": 0 00:04:09.134 }, 00:04:09.134 "claimed": true, 00:04:09.134 "claim_type": "exclusive_write", 00:04:09.134 "zoned": false, 00:04:09.134 "supported_io_types": { 00:04:09.134 "read": true, 00:04:09.134 "write": true, 00:04:09.134 "unmap": true, 00:04:09.134 "flush": true, 00:04:09.134 "reset": true, 00:04:09.134 "nvme_admin": false, 00:04:09.134 "nvme_io": false, 00:04:09.134 "nvme_io_md": false, 00:04:09.134 "write_zeroes": true, 00:04:09.134 "zcopy": true, 00:04:09.134 "get_zone_info": false, 00:04:09.134 "zone_management": false, 00:04:09.134 "zone_append": false, 00:04:09.134 "compare": false, 00:04:09.134 "compare_and_write": false, 00:04:09.134 "abort": true, 00:04:09.134 "seek_hole": false, 00:04:09.134 "seek_data": false, 00:04:09.134 "copy": true, 
00:04:09.134 "nvme_iov_md": false 00:04:09.134 }, 00:04:09.134 "memory_domains": [ 00:04:09.134 { 00:04:09.134 "dma_device_id": "system", 00:04:09.134 "dma_device_type": 1 00:04:09.134 }, 00:04:09.134 { 00:04:09.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.134 "dma_device_type": 2 00:04:09.134 } 00:04:09.134 ], 00:04:09.134 "driver_specific": {} 00:04:09.134 }, 00:04:09.134 { 00:04:09.134 "name": "Passthru0", 00:04:09.134 "aliases": [ 00:04:09.134 "c28abcf0-68d5-5132-811a-781fba480267" 00:04:09.134 ], 00:04:09.134 "product_name": "passthru", 00:04:09.134 "block_size": 512, 00:04:09.134 "num_blocks": 16384, 00:04:09.134 "uuid": "c28abcf0-68d5-5132-811a-781fba480267", 00:04:09.134 "assigned_rate_limits": { 00:04:09.134 "rw_ios_per_sec": 0, 00:04:09.134 "rw_mbytes_per_sec": 0, 00:04:09.134 "r_mbytes_per_sec": 0, 00:04:09.134 "w_mbytes_per_sec": 0 00:04:09.134 }, 00:04:09.134 "claimed": false, 00:04:09.134 "zoned": false, 00:04:09.134 "supported_io_types": { 00:04:09.134 "read": true, 00:04:09.134 "write": true, 00:04:09.134 "unmap": true, 00:04:09.134 "flush": true, 00:04:09.134 "reset": true, 00:04:09.134 "nvme_admin": false, 00:04:09.134 "nvme_io": false, 00:04:09.134 "nvme_io_md": false, 00:04:09.134 "write_zeroes": true, 00:04:09.134 "zcopy": true, 00:04:09.134 "get_zone_info": false, 00:04:09.134 "zone_management": false, 00:04:09.134 "zone_append": false, 00:04:09.134 "compare": false, 00:04:09.134 "compare_and_write": false, 00:04:09.134 "abort": true, 00:04:09.135 "seek_hole": false, 00:04:09.135 "seek_data": false, 00:04:09.135 "copy": true, 00:04:09.135 "nvme_iov_md": false 00:04:09.135 }, 00:04:09.135 "memory_domains": [ 00:04:09.135 { 00:04:09.135 "dma_device_id": "system", 00:04:09.135 "dma_device_type": 1 00:04:09.135 }, 00:04:09.135 { 00:04:09.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.135 "dma_device_type": 2 00:04:09.135 } 00:04:09.135 ], 00:04:09.135 "driver_specific": { 00:04:09.135 "passthru": { 00:04:09.135 "name": "Passthru0", 00:04:09.135 "base_bdev_name": "Malloc2" 00:04:09.135 } 00:04:09.135 } 00:04:09.135 } 00:04:09.135 ]' 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.135 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.393 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.393 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.393 17:46:55 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.393 17:46:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.393 00:04:09.393 real 0m0.229s 00:04:09.393 user 0m0.153s 00:04:09.393 sys 0m0.021s 00:04:09.393 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.393 17:46:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.393 ************************************ 00:04:09.393 END TEST rpc_daemon_integrity 00:04:09.393 ************************************ 00:04:09.393 17:46:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.393 17:46:55 rpc -- rpc/rpc.sh@84 -- # killprocess 2657402 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@948 -- # '[' -z 2657402 ']' 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@952 -- # kill -0 2657402 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@953 -- # uname 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2657402 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2657402' 00:04:09.393 killing process with pid 2657402 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@967 -- # kill 2657402 00:04:09.393 17:46:55 rpc -- common/autotest_common.sh@972 -- # wait 2657402 00:04:09.958 00:04:09.958 real 0m1.991s 00:04:09.958 user 0m2.479s 00:04:09.958 sys 0m0.600s 00:04:09.958 17:46:55 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.958 17:46:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.958 ************************************ 00:04:09.958 END TEST rpc 00:04:09.958 ************************************ 00:04:09.958 17:46:55 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:09.958 17:46:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.958 17:46:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.958 17:46:55 -- common/autotest_common.sh@10 -- # set +x 00:04:09.958 ************************************ 00:04:09.958 START TEST skip_rpc 00:04:09.958 ************************************ 00:04:09.958 17:46:56 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:09.958 * Looking for test storage... 
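Before the log moves on to skip_rpc.sh: the rpc suite that just finished also covered RPC plugins (rpc_plugins) and the trace RPCs (rpc_trace_cmd_test). A rough sketch of those two pieces outside the harness, assuming an SPDK checkout with spdk_tgt running as pid $pid, started with a tracepoint group enabled (e.g. -e bdev, matching the "Tracepoint Group Mask bdev specified" notice earlier), and the bundled test plugin on PYTHONPATH:

  export PYTHONPATH=$PYTHONPATH:test/rpc_plugins
  scripts/rpc.py --plugin rpc_plugin create_malloc        # plugin-provided method from test/rpc_plugins/rpc_plugin.py
  scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1
  scripts/rpc.py trace_get_info | jq 'has("tpoint_shm_path")'   # true when tracing was enabled at startup
  build/bin/spdk_trace -s spdk_tgt -p $pid                # offline decode of /dev/shm/spdk_tgt_trace.pid$pid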
00:04:09.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:09.958 17:46:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:09.958 17:46:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:09.958 17:46:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:09.958 17:46:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.958 17:46:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.958 17:46:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.958 ************************************ 00:04:09.958 START TEST skip_rpc 00:04:09.958 ************************************ 00:04:09.958 17:46:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:09.958 17:46:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2657835 00:04:09.958 17:46:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:09.958 17:46:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.958 17:46:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:09.958 [2024-07-24 17:46:56.127563] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:04:09.958 [2024-07-24 17:46:56.127639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657835 ] 00:04:09.958 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.958 [2024-07-24 17:46:56.183044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.216 [2024-07-24 17:46:56.296697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2657835 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2657835 ']' 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2657835 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2657835 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2657835' 00:04:15.567 killing process with pid 2657835 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2657835 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2657835 00:04:15.567 00:04:15.567 real 0m5.509s 00:04:15.567 user 0m5.186s 00:04:15.567 sys 0m0.320s 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.567 17:47:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.567 ************************************ 00:04:15.567 END TEST skip_rpc 00:04:15.567 ************************************ 00:04:15.567 17:47:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.567 17:47:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.567 17:47:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.567 17:47:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.567 ************************************ 00:04:15.567 START TEST skip_rpc_with_json 00:04:15.567 ************************************ 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2658524 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2658524 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2658524 ']' 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
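The skip_rpc test above boils down to: with --no-rpc-server there is nothing listening on /var/tmp/spdk.sock, so any rpc_cmd must fail, and the NOT wrapper turns that expected failure into a pass. A condensed sketch of the same check, assuming an SPDK build tree:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                           # the test uses a fixed sleep instead of waitforlisten
  if scripts/rpc.py spdk_get_version; then
      echo "FAIL: RPC server answered despite --no-rpc-server" >&2
  fi
  kill $pid && wait $pid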
00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.567 17:47:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.567 [2024-07-24 17:47:01.682555] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:04:15.567 [2024-07-24 17:47:01.682642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2658524 ] 00:04:15.567 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.567 [2024-07-24 17:47:01.743434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.825 [2024-07-24 17:47:01.866690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.082 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.082 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:16.082 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.083 [2024-07-24 17:47:02.137872] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.083 request: 00:04:16.083 { 00:04:16.083 "trtype": "tcp", 00:04:16.083 "method": "nvmf_get_transports", 00:04:16.083 "req_id": 1 00:04:16.083 } 00:04:16.083 Got JSON-RPC error response 00:04:16.083 response: 00:04:16.083 { 00:04:16.083 "code": -19, 00:04:16.083 "message": "No such device" 00:04:16.083 } 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.083 [2024-07-24 17:47:02.145998] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.083 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.083 { 00:04:16.083 "subsystems": [ 00:04:16.083 { 00:04:16.083 "subsystem": "vfio_user_target", 00:04:16.083 "config": null 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "keyring", 00:04:16.083 "config": [] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "iobuf", 00:04:16.083 "config": [ 00:04:16.083 { 00:04:16.083 "method": "iobuf_set_options", 00:04:16.083 "params": { 00:04:16.083 "small_pool_count": 8192, 00:04:16.083 "large_pool_count": 1024, 00:04:16.083 "small_bufsize": 8192, 00:04:16.083 "large_bufsize": 
135168 00:04:16.083 } 00:04:16.083 } 00:04:16.083 ] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "sock", 00:04:16.083 "config": [ 00:04:16.083 { 00:04:16.083 "method": "sock_set_default_impl", 00:04:16.083 "params": { 00:04:16.083 "impl_name": "posix" 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "sock_impl_set_options", 00:04:16.083 "params": { 00:04:16.083 "impl_name": "ssl", 00:04:16.083 "recv_buf_size": 4096, 00:04:16.083 "send_buf_size": 4096, 00:04:16.083 "enable_recv_pipe": true, 00:04:16.083 "enable_quickack": false, 00:04:16.083 "enable_placement_id": 0, 00:04:16.083 "enable_zerocopy_send_server": true, 00:04:16.083 "enable_zerocopy_send_client": false, 00:04:16.083 "zerocopy_threshold": 0, 00:04:16.083 "tls_version": 0, 00:04:16.083 "enable_ktls": false 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "sock_impl_set_options", 00:04:16.083 "params": { 00:04:16.083 "impl_name": "posix", 00:04:16.083 "recv_buf_size": 2097152, 00:04:16.083 "send_buf_size": 2097152, 00:04:16.083 "enable_recv_pipe": true, 00:04:16.083 "enable_quickack": false, 00:04:16.083 "enable_placement_id": 0, 00:04:16.083 "enable_zerocopy_send_server": true, 00:04:16.083 "enable_zerocopy_send_client": false, 00:04:16.083 "zerocopy_threshold": 0, 00:04:16.083 "tls_version": 0, 00:04:16.083 "enable_ktls": false 00:04:16.083 } 00:04:16.083 } 00:04:16.083 ] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "vmd", 00:04:16.083 "config": [] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "accel", 00:04:16.083 "config": [ 00:04:16.083 { 00:04:16.083 "method": "accel_set_options", 00:04:16.083 "params": { 00:04:16.083 "small_cache_size": 128, 00:04:16.083 "large_cache_size": 16, 00:04:16.083 "task_count": 2048, 00:04:16.083 "sequence_count": 2048, 00:04:16.083 "buf_count": 2048 00:04:16.083 } 00:04:16.083 } 00:04:16.083 ] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "bdev", 00:04:16.083 "config": [ 00:04:16.083 { 00:04:16.083 "method": "bdev_set_options", 00:04:16.083 "params": { 00:04:16.083 "bdev_io_pool_size": 65535, 00:04:16.083 "bdev_io_cache_size": 256, 00:04:16.083 "bdev_auto_examine": true, 00:04:16.083 "iobuf_small_cache_size": 128, 00:04:16.083 "iobuf_large_cache_size": 16 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "bdev_raid_set_options", 00:04:16.083 "params": { 00:04:16.083 "process_window_size_kb": 1024, 00:04:16.083 "process_max_bandwidth_mb_sec": 0 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "bdev_iscsi_set_options", 00:04:16.083 "params": { 00:04:16.083 "timeout_sec": 30 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "bdev_nvme_set_options", 00:04:16.083 "params": { 00:04:16.083 "action_on_timeout": "none", 00:04:16.083 "timeout_us": 0, 00:04:16.083 "timeout_admin_us": 0, 00:04:16.083 "keep_alive_timeout_ms": 10000, 00:04:16.083 "arbitration_burst": 0, 00:04:16.083 "low_priority_weight": 0, 00:04:16.083 "medium_priority_weight": 0, 00:04:16.083 "high_priority_weight": 0, 00:04:16.083 "nvme_adminq_poll_period_us": 10000, 00:04:16.083 "nvme_ioq_poll_period_us": 0, 00:04:16.083 "io_queue_requests": 0, 00:04:16.083 "delay_cmd_submit": true, 00:04:16.083 "transport_retry_count": 4, 00:04:16.083 "bdev_retry_count": 3, 00:04:16.083 "transport_ack_timeout": 0, 00:04:16.083 "ctrlr_loss_timeout_sec": 0, 00:04:16.083 "reconnect_delay_sec": 0, 00:04:16.083 "fast_io_fail_timeout_sec": 0, 00:04:16.083 "disable_auto_failback": false, 00:04:16.083 "generate_uuids": 
false, 00:04:16.083 "transport_tos": 0, 00:04:16.083 "nvme_error_stat": false, 00:04:16.083 "rdma_srq_size": 0, 00:04:16.083 "io_path_stat": false, 00:04:16.083 "allow_accel_sequence": false, 00:04:16.083 "rdma_max_cq_size": 0, 00:04:16.083 "rdma_cm_event_timeout_ms": 0, 00:04:16.083 "dhchap_digests": [ 00:04:16.083 "sha256", 00:04:16.083 "sha384", 00:04:16.083 "sha512" 00:04:16.083 ], 00:04:16.083 "dhchap_dhgroups": [ 00:04:16.083 "null", 00:04:16.083 "ffdhe2048", 00:04:16.083 "ffdhe3072", 00:04:16.083 "ffdhe4096", 00:04:16.083 "ffdhe6144", 00:04:16.083 "ffdhe8192" 00:04:16.083 ] 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "bdev_nvme_set_hotplug", 00:04:16.083 "params": { 00:04:16.083 "period_us": 100000, 00:04:16.083 "enable": false 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "bdev_wait_for_examine" 00:04:16.083 } 00:04:16.083 ] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "scsi", 00:04:16.083 "config": null 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "scheduler", 00:04:16.083 "config": [ 00:04:16.083 { 00:04:16.083 "method": "framework_set_scheduler", 00:04:16.083 "params": { 00:04:16.083 "name": "static" 00:04:16.083 } 00:04:16.083 } 00:04:16.083 ] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "vhost_scsi", 00:04:16.083 "config": [] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "vhost_blk", 00:04:16.083 "config": [] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "ublk", 00:04:16.083 "config": [] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "nbd", 00:04:16.083 "config": [] 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "subsystem": "nvmf", 00:04:16.083 "config": [ 00:04:16.083 { 00:04:16.083 "method": "nvmf_set_config", 00:04:16.083 "params": { 00:04:16.083 "discovery_filter": "match_any", 00:04:16.083 "admin_cmd_passthru": { 00:04:16.083 "identify_ctrlr": false 00:04:16.083 } 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "nvmf_set_max_subsystems", 00:04:16.083 "params": { 00:04:16.083 "max_subsystems": 1024 00:04:16.083 } 00:04:16.083 }, 00:04:16.083 { 00:04:16.083 "method": "nvmf_set_crdt", 00:04:16.083 "params": { 00:04:16.083 "crdt1": 0, 00:04:16.083 "crdt2": 0, 00:04:16.083 "crdt3": 0 00:04:16.083 } 00:04:16.084 }, 00:04:16.084 { 00:04:16.084 "method": "nvmf_create_transport", 00:04:16.084 "params": { 00:04:16.084 "trtype": "TCP", 00:04:16.084 "max_queue_depth": 128, 00:04:16.084 "max_io_qpairs_per_ctrlr": 127, 00:04:16.084 "in_capsule_data_size": 4096, 00:04:16.084 "max_io_size": 131072, 00:04:16.084 "io_unit_size": 131072, 00:04:16.084 "max_aq_depth": 128, 00:04:16.084 "num_shared_buffers": 511, 00:04:16.084 "buf_cache_size": 4294967295, 00:04:16.084 "dif_insert_or_strip": false, 00:04:16.084 "zcopy": false, 00:04:16.084 "c2h_success": true, 00:04:16.084 "sock_priority": 0, 00:04:16.084 "abort_timeout_sec": 1, 00:04:16.084 "ack_timeout": 0, 00:04:16.084 "data_wr_pool_size": 0 00:04:16.084 } 00:04:16.084 } 00:04:16.084 ] 00:04:16.084 }, 00:04:16.084 { 00:04:16.084 "subsystem": "iscsi", 00:04:16.084 "config": [ 00:04:16.084 { 00:04:16.084 "method": "iscsi_set_options", 00:04:16.084 "params": { 00:04:16.084 "node_base": "iqn.2016-06.io.spdk", 00:04:16.084 "max_sessions": 128, 00:04:16.084 "max_connections_per_session": 2, 00:04:16.084 "max_queue_depth": 64, 00:04:16.084 "default_time2wait": 2, 00:04:16.084 "default_time2retain": 20, 00:04:16.084 "first_burst_length": 8192, 00:04:16.084 "immediate_data": true, 00:04:16.084 "allow_duplicated_isid": 
false, 00:04:16.084 "error_recovery_level": 0, 00:04:16.084 "nop_timeout": 60, 00:04:16.084 "nop_in_interval": 30, 00:04:16.084 "disable_chap": false, 00:04:16.084 "require_chap": false, 00:04:16.084 "mutual_chap": false, 00:04:16.084 "chap_group": 0, 00:04:16.084 "max_large_datain_per_connection": 64, 00:04:16.084 "max_r2t_per_connection": 4, 00:04:16.084 "pdu_pool_size": 36864, 00:04:16.084 "immediate_data_pool_size": 16384, 00:04:16.084 "data_out_pool_size": 2048 00:04:16.084 } 00:04:16.084 } 00:04:16.084 ] 00:04:16.084 } 00:04:16.084 ] 00:04:16.084 } 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2658524 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2658524 ']' 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2658524 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2658524 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2658524' 00:04:16.084 killing process with pid 2658524 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2658524 00:04:16.084 17:47:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2658524 00:04:16.648 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2658670 00:04:16.648 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.648 17:47:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2658670 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2658670 ']' 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2658670 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2658670 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2658670' 00:04:21.905 killing process with pid 2658670 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2658670 00:04:21.905 17:47:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 
2658670 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.163 00:04:22.163 real 0m6.669s 00:04:22.163 user 0m6.280s 00:04:22.163 sys 0m0.701s 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.163 ************************************ 00:04:22.163 END TEST skip_rpc_with_json 00:04:22.163 ************************************ 00:04:22.163 17:47:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.163 17:47:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.163 17:47:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.163 17:47:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.163 ************************************ 00:04:22.163 START TEST skip_rpc_with_delay 00:04:22.163 ************************************ 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.163 [2024-07-24 17:47:08.404849] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
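That *ERROR* line is the whole point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until a framework_start_init RPC arrives, which can never happen when --no-rpc-server disables the RPC server, so spdk_tgt must refuse the combination and exit non-zero. Reproducing it by hand is a one-liner, assuming an SPDK build tree:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: a non-zero exit and
  #   app.c: ... *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.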
00:04:22.163 [2024-07-24 17:47:08.404977] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:22.163 00:04:22.163 real 0m0.067s 00:04:22.163 user 0m0.048s 00:04:22.163 sys 0m0.019s 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.163 17:47:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.163 ************************************ 00:04:22.163 END TEST skip_rpc_with_delay 00:04:22.163 ************************************ 00:04:22.421 17:47:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.421 17:47:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.421 17:47:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.421 17:47:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.421 17:47:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.421 17:47:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.421 ************************************ 00:04:22.421 START TEST exit_on_failed_rpc_init 00:04:22.421 ************************************ 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2659389 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2659389 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2659389 ']' 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.422 17:47:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.422 [2024-07-24 17:47:08.520286] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:04:22.422 [2024-07-24 17:47:08.520368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659389 ] 00:04:22.422 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.422 [2024-07-24 17:47:08.581691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.679 [2024-07-24 17:47:08.699142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:23.244 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.244 [2024-07-24 17:47:09.500360] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
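The second spdk_tgt launched here is expected to die during RPC init, because the first target (pid 2659389) still owns /var/tmp/spdk.sock; the rpc.c *ERROR* lines below confirm it. A condensed sketch of the scenario, with the trace's paths shortened and a fixed sleep standing in for the waitforlisten helper:

    ./build/bin/spdk_tgt -m 0x1 &         # first target claims /var/tmp/spdk.sock
    first_pid=$!
    sleep 1                               # stand-in for waitforlisten's polling
    if ./build/bin/spdk_tgt -m 0x2; then  # same socket: RPC init must fail
        echo "FAIL: second target should not have started" >&2
        exit 1
    fi
    kill -SIGINT "$first_pid"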
00:04:23.244 [2024-07-24 17:47:09.500476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659524 ] 00:04:23.502 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.502 [2024-07-24 17:47:09.562314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.502 [2024-07-24 17:47:09.679965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.502 [2024-07-24 17:47:09.680100] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:23.502 [2024-07-24 17:47:09.680126] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.502 [2024-07-24 17:47:09.680137] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2659389 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2659389 ']' 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2659389 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2659389 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2659389' 00:04:23.760 killing process with pid 2659389 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2659389 00:04:23.760 17:47:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2659389 00:04:24.326 00:04:24.326 real 0m1.837s 00:04:24.326 user 0m2.196s 00:04:24.326 sys 0m0.485s 00:04:24.326 17:47:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.326 17:47:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.326 ************************************ 00:04:24.326 END TEST exit_on_failed_rpc_init 00:04:24.326 ************************************ 00:04:24.326 17:47:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.326 00:04:24.326 real 0m14.332s 00:04:24.326 user 0m13.808s 00:04:24.326 sys 0m1.692s 00:04:24.326 17:47:10 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.326 17:47:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.326 ************************************ 00:04:24.326 END TEST skip_rpc 00:04:24.326 ************************************ 00:04:24.326 17:47:10 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.326 17:47:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.326 17:47:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.326 17:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:24.326 ************************************ 00:04:24.326 START TEST rpc_client 00:04:24.326 ************************************ 00:04:24.326 17:47:10 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.326 * Looking for test storage... 00:04:24.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:24.326 17:47:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:24.326 OK 00:04:24.326 17:47:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.326 00:04:24.326 real 0m0.069s 00:04:24.326 user 0m0.027s 00:04:24.326 sys 0m0.048s 00:04:24.326 17:47:10 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.326 17:47:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.326 ************************************ 00:04:24.326 END TEST rpc_client 00:04:24.326 ************************************ 00:04:24.326 17:47:10 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.326 17:47:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.327 17:47:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.327 17:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:24.327 ************************************ 00:04:24.327 START TEST json_config 00:04:24.327 ************************************ 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
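Both the rpc_client test above and the json_config runs that follow lean on the autotest waitforlisten helper (its "Waiting for process to start up and listen on UNIX domain socket ..." banner recurs throughout this log). A hypothetical polling loop in the same spirit — not the exact helper; rpc.py's -s (socket path) and -t (timeout) options are real:

    # Poll the target's RPC socket until it answers or ~10 s elapse.
    for i in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done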
00:04:24.327 17:47:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.327 17:47:10 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.327 17:47:10 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.327 17:47:10 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.327 17:47:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.327 17:47:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.327 17:47:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.327 17:47:10 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.327 17:47:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@47 -- # : 0 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.327 17:47:10 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:24.327 INFO: JSON configuration test init 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.327 17:47:10 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.327 17:47:10 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.327 17:47:10 json_config -- json_config/common.sh@10 -- # shift 00:04:24.327 17:47:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.327 17:47:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.327 17:47:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.327 17:47:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:24.327 17:47:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.327 17:47:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2659765 00:04:24.327 17:47:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.327 17:47:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.327 Waiting for target to run... 00:04:24.327 17:47:10 json_config -- json_config/common.sh@25 -- # waitforlisten 2659765 /var/tmp/spdk_tgt.sock 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@829 -- # '[' -z 2659765 ']' 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.327 17:47:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.327 [2024-07-24 17:47:10.593740] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:04:24.327 [2024-07-24 17:47:10.593825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2659765 ] 00:04:24.586 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.151 [2024-07-24 17:47:11.113204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.151 [2024-07-24 17:47:11.219422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.409 17:47:11 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.409 17:47:11 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:25.409 17:47:11 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.409 00:04:25.409 17:47:11 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:25.409 17:47:11 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:25.409 17:47:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.409 17:47:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.409 17:47:11 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:25.409 17:47:11 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:25.409 17:47:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.409 17:47:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.409 17:47:11 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.409 17:47:11 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:25.409 17:47:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:28.691 17:47:14 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:04:28.691 17:47:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:28.691 17:47:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.691 17:47:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.691 17:47:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:28.691 17:47:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:28.691 17:47:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:28.691 17:47:14 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:28.691 17:47:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:28.691 17:47:14 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@51 -- # sort 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:28.949 17:47:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.949 17:47:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:28.949 17:47:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.949 17:47:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:28.949 17:47:15 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:28.949 17:47:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.206 MallocForNvmf0 00:04:29.206 
17:47:15 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.206 17:47:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.463 MallocForNvmf1 00:04:29.463 17:47:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.463 17:47:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.720 [2024-07-24 17:47:15.771470] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.720 17:47:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:29.720 17:47:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:29.976 17:47:16 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:29.976 17:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.233 17:47:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.233 17:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.491 17:47:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.491 17:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.491 [2024-07-24 17:47:16.734666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:30.491 17:47:16 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:30.491 17:47:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.491 17:47:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.749 17:47:16 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:30.749 17:47:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.749 17:47:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.749 17:47:16 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:30.749 17:47:16 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.749 17:47:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.006 MallocBdevForConfigChangeCheck 00:04:31.006 17:47:17 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:31.006 17:47:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.006 17:47:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.006 17:47:17 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:31.006 17:47:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.263 17:47:17 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:31.263 INFO: shutting down applications... 00:04:31.263 17:47:17 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:31.263 17:47:17 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:31.263 17:47:17 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:31.263 17:47:17 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:33.160 Calling clear_iscsi_subsystem 00:04:33.160 Calling clear_nvmf_subsystem 00:04:33.160 Calling clear_nbd_subsystem 00:04:33.160 Calling clear_ublk_subsystem 00:04:33.160 Calling clear_vhost_blk_subsystem 00:04:33.160 Calling clear_vhost_scsi_subsystem 00:04:33.160 Calling clear_bdev_subsystem 00:04:33.160 17:47:18 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:33.160 17:47:18 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:33.160 17:47:18 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:33.160 17:47:18 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.160 17:47:18 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:33.160 17:47:18 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.160 17:47:19 json_config -- json_config/json_config.sh@349 -- # break 00:04:33.160 17:47:19 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:33.160 17:47:19 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:33.160 17:47:19 json_config -- json_config/common.sh@31 -- # local app=target 00:04:33.160 17:47:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.160 17:47:19 json_config -- json_config/common.sh@35 -- # [[ -n 2659765 ]] 00:04:33.160 17:47:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2659765 00:04:33.160 17:47:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.160 17:47:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.160 17:47:19 json_config -- json_config/common.sh@41 -- # kill -0 2659765 00:04:33.160 17:47:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.726 17:47:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.726 17:47:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.726 17:47:19 json_config -- json_config/common.sh@41 -- # kill -0 2659765 00:04:33.726 17:47:19 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.726 17:47:19 json_config -- json_config/common.sh@43 -- # break 00:04:33.726 17:47:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.726 17:47:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.726 SPDK target shutdown done 00:04:33.726 17:47:19 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:33.726 INFO: relaunching applications... 00:04:33.726 17:47:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.726 17:47:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.726 17:47:19 json_config -- json_config/common.sh@10 -- # shift 00:04:33.726 17:47:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.726 17:47:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.726 17:47:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.726 17:47:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.726 17:47:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.726 17:47:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2660962 00:04:33.726 17:47:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.726 17:47:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.726 Waiting for target to run... 00:04:33.726 17:47:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2660962 /var/tmp/spdk_tgt.sock 00:04:33.726 17:47:19 json_config -- common/autotest_common.sh@829 -- # '[' -z 2660962 ']' 00:04:33.726 17:47:19 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.726 17:47:19 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.726 17:47:19 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.726 17:47:19 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.726 17:47:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.726 [2024-07-24 17:47:19.939232] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
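The relaunch above replays the configuration that save_config captured before shutdown. Stripped of the harness, the round trip amounts to the following (flags and paths taken from the trace):

    # Snapshot the live configuration, then boot a fresh target from it.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json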
00:04:33.726 [2024-07-24 17:47:19.939320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660962 ] 00:04:33.726 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.292 [2024-07-24 17:47:20.479505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.550 [2024-07-24 17:47:20.588316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.866 [2024-07-24 17:47:23.632291] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.866 [2024-07-24 17:47:23.664757] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.123 17:47:24 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.123 17:47:24 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:38.123 17:47:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.123 00:04:38.123 17:47:24 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:38.123 17:47:24 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:38.123 INFO: Checking if target configuration is the same... 00:04:38.123 17:47:24 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.123 17:47:24 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:38.123 17:47:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.123 + '[' 2 -ne 2 ']' 00:04:38.123 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.123 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:38.123 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:38.123 +++ basename /dev/fd/62 00:04:38.123 ++ mktemp /tmp/62.XXX 00:04:38.123 + tmp_file_1=/tmp/62.yzs 00:04:38.123 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.123 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.123 + tmp_file_2=/tmp/spdk_tgt_config.json.Het 00:04:38.123 + ret=0 00:04:38.123 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.687 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.687 + diff -u /tmp/62.yzs /tmp/spdk_tgt_config.json.Het 00:04:38.687 + echo 'INFO: JSON config files are the same' 00:04:38.687 INFO: JSON config files are the same 00:04:38.687 + rm /tmp/62.yzs /tmp/spdk_tgt_config.json.Het 00:04:38.687 + exit 0 00:04:38.687 17:47:24 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:38.687 17:47:24 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:38.687 INFO: changing configuration and checking if this can be detected... 
00:04:38.687 17:47:24 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.687 17:47:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.944 17:47:25 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.944 17:47:25 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:38.944 17:47:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.944 + '[' 2 -ne 2 ']' 00:04:38.944 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.944 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:38.944 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:38.944 +++ basename /dev/fd/62 00:04:38.944 ++ mktemp /tmp/62.XXX 00:04:38.944 + tmp_file_1=/tmp/62.daG 00:04:38.944 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.944 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.944 + tmp_file_2=/tmp/spdk_tgt_config.json.380 00:04:38.944 + ret=0 00:04:38.944 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.202 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.202 + diff -u /tmp/62.daG /tmp/spdk_tgt_config.json.380 00:04:39.202 + ret=1 00:04:39.202 + echo '=== Start of file: /tmp/62.daG ===' 00:04:39.202 + cat /tmp/62.daG 00:04:39.202 + echo '=== End of file: /tmp/62.daG ===' 00:04:39.202 + echo '' 00:04:39.202 + echo '=== Start of file: /tmp/spdk_tgt_config.json.380 ===' 00:04:39.202 + cat /tmp/spdk_tgt_config.json.380 00:04:39.202 + echo '=== End of file: /tmp/spdk_tgt_config.json.380 ===' 00:04:39.202 + echo '' 00:04:39.202 + rm /tmp/62.daG /tmp/spdk_tgt_config.json.380 00:04:39.202 + exit 1 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:39.202 INFO: configuration change detected. 
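Both verdicts above come from the same mechanism: json_diff.sh normalizes each config with config_filter.py -method sort and diffs the results, so diff exits 0 while the configs match and 1 once MallocBdevForConfigChangeCheck has been deleted. A condensed sketch using the temporary file names from the trace (the filter's stdin/stdout plumbing here is an assumption):

    ./test/json_config/config_filter.py -method sort < /tmp/62.daG > /tmp/sorted_live.json
    ./test/json_config/config_filter.py -method sort < /tmp/spdk_tgt_config.json.380 > /tmp/sorted_saved.json
    diff -u /tmp/sorted_live.json /tmp/sorted_saved.json \
        && echo 'INFO: JSON config files are the same' \
        || echo 'INFO: configuration change detected.'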
00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:39.202 17:47:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.202 17:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@321 -- # [[ -n 2660962 ]] 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.202 17:47:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.202 17:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:39.202 17:47:25 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.202 17:47:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.202 17:47:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.460 17:47:25 json_config -- json_config/json_config.sh@327 -- # killprocess 2660962 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@948 -- # '[' -z 2660962 ']' 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@952 -- # kill -0 2660962 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@953 -- # uname 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2660962 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2660962' 00:04:39.460 killing process with pid 2660962 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@967 -- # kill 2660962 00:04:39.460 17:47:25 json_config -- common/autotest_common.sh@972 -- # wait 2660962 00:04:41.357 17:47:27 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.357 17:47:27 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:41.357 17:47:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.358 17:47:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.358 17:47:27 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:41.358 17:47:27 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:41.358 INFO: Success 00:04:41.358 00:04:41.358 real 0m16.672s 
00:04:41.358 user 0m18.428s 00:04:41.358 sys 0m2.276s 00:04:41.358 17:47:27 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.358 17:47:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.358 ************************************ 00:04:41.358 END TEST json_config 00:04:41.358 ************************************ 00:04:41.358 17:47:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.358 17:47:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.358 17:47:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.358 17:47:27 -- common/autotest_common.sh@10 -- # set +x 00:04:41.358 ************************************ 00:04:41.358 START TEST json_config_extra_key 00:04:41.358 ************************************ 00:04:41.358 17:47:27 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.358 17:47:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.358 17:47:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.358 17:47:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.358 17:47:27 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.358 17:47:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.358 17:47:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.358 17:47:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.358 17:47:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.358 17:47:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.358 17:47:27 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:41.358 INFO: launching applications... 00:04:41.358 17:47:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2662007 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.358 Waiting for target to run... 00:04:41.358 17:47:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2662007 /var/tmp/spdk_tgt.sock 00:04:41.358 17:47:27 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2662007 ']' 00:04:41.358 17:47:27 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.358 17:47:27 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.358 17:47:27 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.358 17:47:27 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.358 17:47:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.358 [2024-07-24 17:47:27.317930] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
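Startup aside, the part of this test worth noting is the shutdown sequence traced further below: json_config_test_shutdown_app sends SIGINT and then polls the pid for up to thirty half-second intervals. Reduced to its essentials (pid taken from the trace):

    kill -SIGINT 2662007
    for i in $(seq 1 30); do
        kill -0 2662007 2>/dev/null || break   # loop until the target is gone
        sleep 0.5
    done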
00:04:41.358 [2024-07-24 17:47:27.318022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662007 ] 00:04:41.358 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.616 [2024-07-24 17:47:27.655261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.616 [2024-07-24 17:47:27.743983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.180 17:47:28 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.180 17:47:28 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.180 00:04:42.180 17:47:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:42.180 INFO: shutting down applications... 00:04:42.180 17:47:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2662007 ]] 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2662007 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2662007 00:04:42.180 17:47:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.745 17:47:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.745 17:47:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.745 17:47:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2662007 00:04:42.745 17:47:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.311 17:47:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.311 17:47:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.311 17:47:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2662007 00:04:43.311 17:47:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.311 17:47:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.311 17:47:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.311 17:47:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.311 SPDK target shutdown done 00:04:43.311 17:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.311 Success 00:04:43.311 00:04:43.311 real 0m2.081s 00:04:43.311 user 0m1.632s 00:04:43.311 sys 0m0.443s 00:04:43.311 17:47:29 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.311 17:47:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.311 ************************************ 00:04:43.311 END TEST json_config_extra_key 00:04:43.311 ************************************ 00:04:43.311 17:47:29 -- spdk/autotest.sh@174 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.311 17:47:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.311 17:47:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.311 17:47:29 -- common/autotest_common.sh@10 -- # set +x 00:04:43.311 ************************************ 00:04:43.311 START TEST alias_rpc 00:04:43.311 ************************************ 00:04:43.311 17:47:29 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.311 * Looking for test storage... 00:04:43.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:43.311 17:47:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.311 17:47:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2662321 00:04:43.311 17:47:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.311 17:47:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2662321 00:04:43.311 17:47:29 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2662321 ']' 00:04:43.311 17:47:29 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.311 17:47:29 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.311 17:47:29 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.312 17:47:29 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.312 17:47:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.312 [2024-07-24 17:47:29.435942] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:04:43.312 [2024-07-24 17:47:29.436022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662321 ] 00:04:43.312 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.312 [2024-07-24 17:47:29.492430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.569 [2024-07-24 17:47:29.606142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.827 17:47:29 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.827 17:47:29 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.827 17:47:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:44.084 17:47:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2662321 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2662321 ']' 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2662321 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2662321 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2662321' 00:04:44.084 killing process with pid 2662321 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@967 -- # kill 2662321 00:04:44.084 17:47:30 alias_rpc -- common/autotest_common.sh@972 -- # wait 2662321 00:04:44.649 00:04:44.649 real 0m1.297s 00:04:44.649 user 0m1.381s 00:04:44.649 sys 0m0.427s 00:04:44.649 17:47:30 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.649 17:47:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.649 ************************************ 00:04:44.649 END TEST alias_rpc 00:04:44.649 ************************************ 00:04:44.649 17:47:30 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:44.649 17:47:30 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.649 17:47:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.649 17:47:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.649 17:47:30 -- common/autotest_common.sh@10 -- # set +x 00:04:44.649 ************************************ 00:04:44.649 START TEST spdkcli_tcp 00:04:44.649 ************************************ 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.649 * Looking for test storage... 
00:04:44.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2662507 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.649 17:47:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2662507 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2662507 ']' 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.649 17:47:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.649 [2024-07-24 17:47:30.789721] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:04:44.649 [2024-07-24 17:47:30.789813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662507 ] 00:04:44.649 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.649 [2024-07-24 17:47:30.845313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.907 [2024-07-24 17:47:30.953637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.907 [2024-07-24 17:47:30.953641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.164 17:47:31 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.164 17:47:31 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:45.164 17:47:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2662521 00:04:45.164 17:47:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.164 17:47:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:45.422 [ 00:04:45.422 "bdev_malloc_delete", 00:04:45.422 "bdev_malloc_create", 00:04:45.422 "bdev_null_resize", 00:04:45.422 "bdev_null_delete", 00:04:45.422 "bdev_null_create", 00:04:45.422 "bdev_nvme_cuse_unregister", 00:04:45.422 "bdev_nvme_cuse_register", 00:04:45.422 "bdev_opal_new_user", 00:04:45.422 "bdev_opal_set_lock_state", 00:04:45.422 "bdev_opal_delete", 00:04:45.422 "bdev_opal_get_info", 00:04:45.422 "bdev_opal_create", 00:04:45.422 "bdev_nvme_opal_revert", 00:04:45.422 "bdev_nvme_opal_init", 00:04:45.422 "bdev_nvme_send_cmd", 00:04:45.422 "bdev_nvme_get_path_iostat", 00:04:45.422 "bdev_nvme_get_mdns_discovery_info", 00:04:45.422 "bdev_nvme_stop_mdns_discovery", 00:04:45.422 "bdev_nvme_start_mdns_discovery", 00:04:45.422 "bdev_nvme_set_multipath_policy", 00:04:45.422 "bdev_nvme_set_preferred_path", 00:04:45.422 "bdev_nvme_get_io_paths", 00:04:45.422 "bdev_nvme_remove_error_injection", 00:04:45.422 "bdev_nvme_add_error_injection", 00:04:45.422 "bdev_nvme_get_discovery_info", 00:04:45.422 "bdev_nvme_stop_discovery", 00:04:45.422 "bdev_nvme_start_discovery", 00:04:45.422 "bdev_nvme_get_controller_health_info", 00:04:45.422 "bdev_nvme_disable_controller", 00:04:45.422 "bdev_nvme_enable_controller", 00:04:45.422 "bdev_nvme_reset_controller", 00:04:45.422 "bdev_nvme_get_transport_statistics", 00:04:45.422 "bdev_nvme_apply_firmware", 00:04:45.422 "bdev_nvme_detach_controller", 00:04:45.422 "bdev_nvme_get_controllers", 00:04:45.422 "bdev_nvme_attach_controller", 00:04:45.422 "bdev_nvme_set_hotplug", 00:04:45.422 "bdev_nvme_set_options", 00:04:45.422 "bdev_passthru_delete", 00:04:45.422 "bdev_passthru_create", 00:04:45.422 "bdev_lvol_set_parent_bdev", 00:04:45.422 "bdev_lvol_set_parent", 00:04:45.422 "bdev_lvol_check_shallow_copy", 00:04:45.422 "bdev_lvol_start_shallow_copy", 00:04:45.422 "bdev_lvol_grow_lvstore", 00:04:45.422 "bdev_lvol_get_lvols", 00:04:45.422 "bdev_lvol_get_lvstores", 00:04:45.422 "bdev_lvol_delete", 00:04:45.422 "bdev_lvol_set_read_only", 00:04:45.422 "bdev_lvol_resize", 00:04:45.422 "bdev_lvol_decouple_parent", 00:04:45.422 "bdev_lvol_inflate", 00:04:45.422 "bdev_lvol_rename", 00:04:45.422 "bdev_lvol_clone_bdev", 00:04:45.422 "bdev_lvol_clone", 00:04:45.422 "bdev_lvol_snapshot", 00:04:45.422 "bdev_lvol_create", 00:04:45.422 "bdev_lvol_delete_lvstore", 00:04:45.422 
"bdev_lvol_rename_lvstore", 00:04:45.422 "bdev_lvol_create_lvstore", 00:04:45.422 "bdev_raid_set_options", 00:04:45.422 "bdev_raid_remove_base_bdev", 00:04:45.422 "bdev_raid_add_base_bdev", 00:04:45.422 "bdev_raid_delete", 00:04:45.422 "bdev_raid_create", 00:04:45.422 "bdev_raid_get_bdevs", 00:04:45.422 "bdev_error_inject_error", 00:04:45.422 "bdev_error_delete", 00:04:45.423 "bdev_error_create", 00:04:45.423 "bdev_split_delete", 00:04:45.423 "bdev_split_create", 00:04:45.423 "bdev_delay_delete", 00:04:45.423 "bdev_delay_create", 00:04:45.423 "bdev_delay_update_latency", 00:04:45.423 "bdev_zone_block_delete", 00:04:45.423 "bdev_zone_block_create", 00:04:45.423 "blobfs_create", 00:04:45.423 "blobfs_detect", 00:04:45.423 "blobfs_set_cache_size", 00:04:45.423 "bdev_aio_delete", 00:04:45.423 "bdev_aio_rescan", 00:04:45.423 "bdev_aio_create", 00:04:45.423 "bdev_ftl_set_property", 00:04:45.423 "bdev_ftl_get_properties", 00:04:45.423 "bdev_ftl_get_stats", 00:04:45.423 "bdev_ftl_unmap", 00:04:45.423 "bdev_ftl_unload", 00:04:45.423 "bdev_ftl_delete", 00:04:45.423 "bdev_ftl_load", 00:04:45.423 "bdev_ftl_create", 00:04:45.423 "bdev_virtio_attach_controller", 00:04:45.423 "bdev_virtio_scsi_get_devices", 00:04:45.423 "bdev_virtio_detach_controller", 00:04:45.423 "bdev_virtio_blk_set_hotplug", 00:04:45.423 "bdev_iscsi_delete", 00:04:45.423 "bdev_iscsi_create", 00:04:45.423 "bdev_iscsi_set_options", 00:04:45.423 "accel_error_inject_error", 00:04:45.423 "ioat_scan_accel_module", 00:04:45.423 "dsa_scan_accel_module", 00:04:45.423 "iaa_scan_accel_module", 00:04:45.423 "vfu_virtio_create_scsi_endpoint", 00:04:45.423 "vfu_virtio_scsi_remove_target", 00:04:45.423 "vfu_virtio_scsi_add_target", 00:04:45.423 "vfu_virtio_create_blk_endpoint", 00:04:45.423 "vfu_virtio_delete_endpoint", 00:04:45.423 "keyring_file_remove_key", 00:04:45.423 "keyring_file_add_key", 00:04:45.423 "keyring_linux_set_options", 00:04:45.423 "iscsi_get_histogram", 00:04:45.423 "iscsi_enable_histogram", 00:04:45.423 "iscsi_set_options", 00:04:45.423 "iscsi_get_auth_groups", 00:04:45.423 "iscsi_auth_group_remove_secret", 00:04:45.423 "iscsi_auth_group_add_secret", 00:04:45.423 "iscsi_delete_auth_group", 00:04:45.423 "iscsi_create_auth_group", 00:04:45.423 "iscsi_set_discovery_auth", 00:04:45.423 "iscsi_get_options", 00:04:45.423 "iscsi_target_node_request_logout", 00:04:45.423 "iscsi_target_node_set_redirect", 00:04:45.423 "iscsi_target_node_set_auth", 00:04:45.423 "iscsi_target_node_add_lun", 00:04:45.423 "iscsi_get_stats", 00:04:45.423 "iscsi_get_connections", 00:04:45.423 "iscsi_portal_group_set_auth", 00:04:45.423 "iscsi_start_portal_group", 00:04:45.423 "iscsi_delete_portal_group", 00:04:45.423 "iscsi_create_portal_group", 00:04:45.423 "iscsi_get_portal_groups", 00:04:45.423 "iscsi_delete_target_node", 00:04:45.423 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.423 "iscsi_target_node_add_pg_ig_maps", 00:04:45.423 "iscsi_create_target_node", 00:04:45.423 "iscsi_get_target_nodes", 00:04:45.423 "iscsi_delete_initiator_group", 00:04:45.423 "iscsi_initiator_group_remove_initiators", 00:04:45.423 "iscsi_initiator_group_add_initiators", 00:04:45.423 "iscsi_create_initiator_group", 00:04:45.423 "iscsi_get_initiator_groups", 00:04:45.423 "nvmf_set_crdt", 00:04:45.423 "nvmf_set_config", 00:04:45.423 "nvmf_set_max_subsystems", 00:04:45.423 "nvmf_stop_mdns_prr", 00:04:45.423 "nvmf_publish_mdns_prr", 00:04:45.423 "nvmf_subsystem_get_listeners", 00:04:45.423 "nvmf_subsystem_get_qpairs", 00:04:45.423 "nvmf_subsystem_get_controllers", 00:04:45.423 
"nvmf_get_stats", 00:04:45.423 "nvmf_get_transports", 00:04:45.423 "nvmf_create_transport", 00:04:45.423 "nvmf_get_targets", 00:04:45.423 "nvmf_delete_target", 00:04:45.423 "nvmf_create_target", 00:04:45.423 "nvmf_subsystem_allow_any_host", 00:04:45.423 "nvmf_subsystem_remove_host", 00:04:45.423 "nvmf_subsystem_add_host", 00:04:45.423 "nvmf_ns_remove_host", 00:04:45.423 "nvmf_ns_add_host", 00:04:45.423 "nvmf_subsystem_remove_ns", 00:04:45.423 "nvmf_subsystem_add_ns", 00:04:45.423 "nvmf_subsystem_listener_set_ana_state", 00:04:45.423 "nvmf_discovery_get_referrals", 00:04:45.423 "nvmf_discovery_remove_referral", 00:04:45.423 "nvmf_discovery_add_referral", 00:04:45.423 "nvmf_subsystem_remove_listener", 00:04:45.423 "nvmf_subsystem_add_listener", 00:04:45.423 "nvmf_delete_subsystem", 00:04:45.423 "nvmf_create_subsystem", 00:04:45.423 "nvmf_get_subsystems", 00:04:45.423 "env_dpdk_get_mem_stats", 00:04:45.423 "nbd_get_disks", 00:04:45.423 "nbd_stop_disk", 00:04:45.423 "nbd_start_disk", 00:04:45.423 "ublk_recover_disk", 00:04:45.423 "ublk_get_disks", 00:04:45.423 "ublk_stop_disk", 00:04:45.423 "ublk_start_disk", 00:04:45.423 "ublk_destroy_target", 00:04:45.423 "ublk_create_target", 00:04:45.423 "virtio_blk_create_transport", 00:04:45.423 "virtio_blk_get_transports", 00:04:45.423 "vhost_controller_set_coalescing", 00:04:45.423 "vhost_get_controllers", 00:04:45.423 "vhost_delete_controller", 00:04:45.423 "vhost_create_blk_controller", 00:04:45.423 "vhost_scsi_controller_remove_target", 00:04:45.423 "vhost_scsi_controller_add_target", 00:04:45.423 "vhost_start_scsi_controller", 00:04:45.423 "vhost_create_scsi_controller", 00:04:45.423 "thread_set_cpumask", 00:04:45.423 "framework_get_governor", 00:04:45.423 "framework_get_scheduler", 00:04:45.423 "framework_set_scheduler", 00:04:45.423 "framework_get_reactors", 00:04:45.423 "thread_get_io_channels", 00:04:45.423 "thread_get_pollers", 00:04:45.423 "thread_get_stats", 00:04:45.423 "framework_monitor_context_switch", 00:04:45.423 "spdk_kill_instance", 00:04:45.423 "log_enable_timestamps", 00:04:45.423 "log_get_flags", 00:04:45.423 "log_clear_flag", 00:04:45.423 "log_set_flag", 00:04:45.423 "log_get_level", 00:04:45.423 "log_set_level", 00:04:45.423 "log_get_print_level", 00:04:45.423 "log_set_print_level", 00:04:45.423 "framework_enable_cpumask_locks", 00:04:45.423 "framework_disable_cpumask_locks", 00:04:45.423 "framework_wait_init", 00:04:45.423 "framework_start_init", 00:04:45.423 "scsi_get_devices", 00:04:45.423 "bdev_get_histogram", 00:04:45.423 "bdev_enable_histogram", 00:04:45.423 "bdev_set_qos_limit", 00:04:45.423 "bdev_set_qd_sampling_period", 00:04:45.423 "bdev_get_bdevs", 00:04:45.423 "bdev_reset_iostat", 00:04:45.423 "bdev_get_iostat", 00:04:45.423 "bdev_examine", 00:04:45.423 "bdev_wait_for_examine", 00:04:45.423 "bdev_set_options", 00:04:45.423 "notify_get_notifications", 00:04:45.423 "notify_get_types", 00:04:45.423 "accel_get_stats", 00:04:45.423 "accel_set_options", 00:04:45.423 "accel_set_driver", 00:04:45.423 "accel_crypto_key_destroy", 00:04:45.423 "accel_crypto_keys_get", 00:04:45.423 "accel_crypto_key_create", 00:04:45.423 "accel_assign_opc", 00:04:45.423 "accel_get_module_info", 00:04:45.423 "accel_get_opc_assignments", 00:04:45.423 "vmd_rescan", 00:04:45.423 "vmd_remove_device", 00:04:45.423 "vmd_enable", 00:04:45.423 "sock_get_default_impl", 00:04:45.423 "sock_set_default_impl", 00:04:45.423 "sock_impl_set_options", 00:04:45.423 "sock_impl_get_options", 00:04:45.423 "iobuf_get_stats", 00:04:45.423 "iobuf_set_options", 
00:04:45.423 "keyring_get_keys", 00:04:45.423 "framework_get_pci_devices", 00:04:45.423 "framework_get_config", 00:04:45.423 "framework_get_subsystems", 00:04:45.423 "vfu_tgt_set_base_path", 00:04:45.423 "trace_get_info", 00:04:45.423 "trace_get_tpoint_group_mask", 00:04:45.423 "trace_disable_tpoint_group", 00:04:45.423 "trace_enable_tpoint_group", 00:04:45.423 "trace_clear_tpoint_mask", 00:04:45.423 "trace_set_tpoint_mask", 00:04:45.423 "spdk_get_version", 00:04:45.423 "rpc_get_methods" 00:04:45.423 ] 00:04:45.423 17:47:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.423 17:47:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.423 17:47:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2662507 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2662507 ']' 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2662507 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2662507 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2662507' 00:04:45.423 killing process with pid 2662507 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2662507 00:04:45.423 17:47:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2662507 00:04:45.989 00:04:45.989 real 0m1.286s 00:04:45.989 user 0m2.243s 00:04:45.989 sys 0m0.440s 00:04:45.989 17:47:31 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.989 17:47:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.989 ************************************ 00:04:45.989 END TEST spdkcli_tcp 00:04:45.989 ************************************ 00:04:45.990 17:47:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.990 17:47:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.990 17:47:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.990 17:47:31 -- common/autotest_common.sh@10 -- # set +x 00:04:45.990 ************************************ 00:04:45.990 START TEST dpdk_mem_utility 00:04:45.990 ************************************ 00:04:45.990 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.990 * Looking for test storage... 
00:04:45.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:45.990 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:45.990 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2662707 00:04:45.990 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.990 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2662707 00:04:45.990 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2662707 ']' 00:04:45.990 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.990 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.990 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.990 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.990 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.990 [2024-07-24 17:47:32.121378] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:04:45.990 [2024-07-24 17:47:32.121469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662707 ] 00:04:45.990 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.990 [2024-07-24 17:47:32.179134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.248 [2024-07-24 17:47:32.287341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.506 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.506 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:46.506 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.506 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.506 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.506 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.506 { 00:04:46.506 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.506 } 00:04:46.506 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.506 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.506 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:46.506 1 heaps totaling size 814.000000 MiB 00:04:46.506 size: 814.000000 MiB heap id: 0 00:04:46.506 end heaps---------- 00:04:46.506 8 mempools totaling size 598.116089 MiB 00:04:46.506 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.506 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.506 size: 84.521057 MiB name: bdev_io_2662707 00:04:46.506 size: 51.011292 MiB name: evtpool_2662707 00:04:46.506 
size: 50.003479 MiB name: msgpool_2662707 00:04:46.506 size: 21.763794 MiB name: PDU_Pool 00:04:46.506 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.506 size: 0.026123 MiB name: Session_Pool 00:04:46.506 end mempools------- 00:04:46.506 6 memzones totaling size 4.142822 MiB 00:04:46.506 size: 1.000366 MiB name: RG_ring_0_2662707 00:04:46.506 size: 1.000366 MiB name: RG_ring_1_2662707 00:04:46.506 size: 1.000366 MiB name: RG_ring_4_2662707 00:04:46.506 size: 1.000366 MiB name: RG_ring_5_2662707 00:04:46.506 size: 0.125366 MiB name: RG_ring_2_2662707 00:04:46.506 size: 0.015991 MiB name: RG_ring_3_2662707 00:04:46.506 end memzones------- 00:04:46.506 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.506 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:46.506 list of free elements. size: 12.519348 MiB 00:04:46.506 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:46.506 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:46.506 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:46.506 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:46.506 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:46.506 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:46.506 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:46.506 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:46.506 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:46.506 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:46.506 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:46.506 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:46.506 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:46.506 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:46.506 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:46.506 list of standard malloc elements. 
size: 199.218079 MiB 00:04:46.506 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:46.506 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:46.506 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:46.506 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:46.506 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:46.506 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:46.506 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:46.506 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:46.506 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:46.506 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:46.506 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:46.506 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:46.506 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:46.506 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:46.506 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:46.506 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:46.506 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:46.506 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:46.506 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:46.506 list of memzone associated elements. 
size: 602.262573 MiB 00:04:46.506 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:46.506 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:46.506 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:46.506 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:46.506 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:46.506 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2662707_0 00:04:46.506 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:46.507 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2662707_0 00:04:46.507 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:46.507 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2662707_0 00:04:46.507 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:46.507 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:46.507 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:46.507 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:46.507 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:46.507 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2662707 00:04:46.507 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:46.507 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2662707 00:04:46.507 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:46.507 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2662707 00:04:46.507 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:46.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:46.507 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:46.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:46.507 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:46.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:46.507 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:46.507 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:46.507 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:46.507 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2662707 00:04:46.507 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:46.507 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2662707 00:04:46.507 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:46.507 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2662707 00:04:46.507 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:46.507 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2662707 00:04:46.507 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:46.507 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2662707 00:04:46.507 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:46.507 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:46.507 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:46.507 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:46.507 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:46.507 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:46.507 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:46.507 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2662707 00:04:46.507 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:46.507 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:46.507 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:46.507 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:46.507 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:46.507 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2662707 00:04:46.507 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:46.507 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:46.507 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:46.507 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2662707 00:04:46.507 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:46.507 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2662707 00:04:46.507 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:46.507 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:46.507 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:46.507 17:47:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2662707 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2662707 ']' 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2662707 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2662707 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2662707' 00:04:46.507 killing process with pid 2662707 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2662707 00:04:46.507 17:47:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2662707 00:04:47.073 00:04:47.073 real 0m1.157s 00:04:47.073 user 0m1.113s 00:04:47.073 sys 0m0.410s 00:04:47.073 17:47:33 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.073 17:47:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 ************************************ 00:04:47.073 END TEST dpdk_mem_utility 00:04:47.073 ************************************ 00:04:47.073 17:47:33 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:47.073 17:47:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.073 17:47:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.073 17:47:33 -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 ************************************ 00:04:47.073 START TEST event 00:04:47.073 ************************************ 00:04:47.073 17:47:33 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:47.073 * Looking for test storage... 
00:04:47.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:47.073 17:47:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:47.073 17:47:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:47.073 17:47:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.073 17:47:33 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:47.073 17:47:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.073 17:47:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.073 ************************************ 00:04:47.073 START TEST event_perf 00:04:47.073 ************************************ 00:04:47.073 17:47:33 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.073 Running I/O for 1 seconds...[2024-07-24 17:47:33.315703] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:04:47.073 [2024-07-24 17:47:33.315769] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662904 ] 00:04:47.330 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.330 [2024-07-24 17:47:33.378509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.330 [2024-07-24 17:47:33.499821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.330 [2024-07-24 17:47:33.499883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.330 [2024-07-24 17:47:33.499974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.330 [2024-07-24 17:47:33.499978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.701 Running I/O for 1 seconds... 00:04:48.701 lcore 0: 229840 00:04:48.701 lcore 1: 229838 00:04:48.701 lcore 2: 229839 00:04:48.701 lcore 3: 229839 00:04:48.701 done. 00:04:48.701 00:04:48.701 real 0m1.320s 00:04:48.701 user 0m4.226s 00:04:48.701 sys 0m0.090s 00:04:48.701 17:47:34 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.701 17:47:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.701 ************************************ 00:04:48.701 END TEST event_perf 00:04:48.701 ************************************ 00:04:48.701 17:47:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.701 17:47:34 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:48.701 17:47:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.701 17:47:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.701 ************************************ 00:04:48.701 START TEST event_reactor 00:04:48.701 ************************************ 00:04:48.701 17:47:34 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.701 [2024-07-24 17:47:34.679006] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:04:48.701 [2024-07-24 17:47:34.679067] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663063 ] 00:04:48.702 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.702 [2024-07-24 17:47:34.741090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.702 [2024-07-24 17:47:34.862723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.075 test_start 00:04:50.075 oneshot 00:04:50.075 tick 100 00:04:50.075 tick 100 00:04:50.075 tick 250 00:04:50.075 tick 100 00:04:50.075 tick 100 00:04:50.075 tick 100 00:04:50.075 tick 250 00:04:50.075 tick 500 00:04:50.075 tick 100 00:04:50.075 tick 100 00:04:50.075 tick 250 00:04:50.075 tick 100 00:04:50.075 tick 100 00:04:50.075 test_end 00:04:50.075 00:04:50.075 real 0m1.315s 00:04:50.075 user 0m1.228s 00:04:50.075 sys 0m0.083s 00:04:50.075 17:47:35 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.075 17:47:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:50.075 ************************************ 00:04:50.075 END TEST event_reactor 00:04:50.075 ************************************ 00:04:50.075 17:47:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.075 17:47:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:50.075 17:47:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.075 17:47:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.075 ************************************ 00:04:50.075 START TEST event_reactor_perf 00:04:50.075 ************************************ 00:04:50.075 17:47:36 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.075 [2024-07-24 17:47:36.035563] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:04:50.075 [2024-07-24 17:47:36.035624] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663334 ] 00:04:50.075 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.075 [2024-07-24 17:47:36.097844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.075 [2024-07-24 17:47:36.218752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.447 test_start 00:04:51.447 test_end 00:04:51.447 Performance: 356815 events per second 00:04:51.447 00:04:51.447 real 0m1.319s 00:04:51.447 user 0m1.230s 00:04:51.447 sys 0m0.084s 00:04:51.447 17:47:37 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.447 17:47:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.447 ************************************ 00:04:51.447 END TEST event_reactor_perf 00:04:51.447 ************************************ 00:04:51.447 17:47:37 event -- event/event.sh@49 -- # uname -s 00:04:51.447 17:47:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:51.447 17:47:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.448 17:47:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.448 17:47:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.448 17:47:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.448 ************************************ 00:04:51.448 START TEST event_scheduler 00:04:51.448 ************************************ 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.448 * Looking for test storage... 00:04:51.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:51.448 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:51.448 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2663522 00:04:51.448 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:51.448 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.448 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2663522 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2663522 ']' 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
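The "Waiting for process..." message above comes from the waitforlisten helper in autotest_common.sh: the scheduler app was launched with --wait-for-rpc, so the harness must poll the RPC socket before any rpc_cmd can succeed. A minimal sketch of that polling idea follows; the real helper differs in detail, and the use of rpc.py with spdk_get_version as the liveness probe is an assumption (both the flags and the method do appear elsewhere in this log):

# Sketch of the waitforlisten idea, not the exact helper from autotest_common.sh.
# Assumes $rootdir points at the SPDK checkout used in this run.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
        # rpc.py exits non-zero until the socket accepts connections; spdk_get_version is a cheap probe
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 spdk_get_version &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}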
00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.448 [2024-07-24 17:47:37.487111] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:04:51.448 [2024-07-24 17:47:37.487199] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663522 ] 00:04:51.448 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.448 [2024-07-24 17:47:37.543883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.448 [2024-07-24 17:47:37.653632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.448 [2024-07-24 17:47:37.653698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.448 [2024-07-24 17:47:37.653761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.448 [2024-07-24 17:47:37.653764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:51.448 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.448 [2024-07-24 17:47:37.702516] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:51.448 [2024-07-24 17:47:37.702541] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:51.448 [2024-07-24 17:47:37.702572] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:51.448 [2024-07-24 17:47:37.702584] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:51.448 [2024-07-24 17:47:37.702595] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.448 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.448 17:47:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 [2024-07-24 17:47:37.800735] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
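The scheduler_create_thread trace that follows pins one busy (-a 100) and one idle (-a 0) thread to each of the four cores via masks 0x1 through 0x8, then creates unpinned threads and exercises scheduler_thread_set_active and scheduler_thread_delete; these busy/idle pairs are what the dynamic scheduler settings logged above (load limit 20, core limit 80, core busy 95) get to rebalance. rpc_cmd in the trace is a thin wrapper over rpc.py, so the same sequence condensed to direct calls looks roughly like this (the $rootdir path is an assumption, and the plugin module must be importable, which the harness arranges):

# Condensed from the trace below; assumes the scheduler test app is already
# listening on /var/tmp/spdk.sock and that scheduler_plugin is on PYTHONPATH.
rpc="$rootdir/scripts/rpc.py --plugin scheduler_plugin"
$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100     # busy thread pinned to core 0
$rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0         # idle thread pinned to core 0
thread_id=$($rpc scheduler_thread_create -n half_active -a 0)   # unpinned; prints the new thread id (11 in the trace)
$rpc scheduler_thread_set_active "$thread_id" 50                # raise it to 50% busy
thread_id=$($rpc scheduler_thread_create -n deleted -a 100)     # 12 in the trace
$rpc scheduler_thread_delete "$thread_id"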
00:04:51.706 17:47:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:51.706 17:47:37 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.706 17:47:37 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 ************************************ 00:04:51.706 START TEST scheduler_create_thread 00:04:51.706 ************************************ 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 2 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 3 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 4 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 5 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 6 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 7 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 8 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 9 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.706 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.706 10 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.707 17:47:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.271 17:47:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.271 00:04:52.271 real 0m0.591s 00:04:52.271 user 0m0.011s 00:04:52.271 sys 0m0.002s 00:04:52.271 17:47:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.271 17:47:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.271 ************************************ 00:04:52.271 END TEST scheduler_create_thread 00:04:52.271 ************************************ 00:04:52.271 17:47:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:52.271 17:47:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2663522 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2663522 ']' 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2663522 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2663522 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2663522' 00:04:52.271 killing process with pid 2663522 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2663522 00:04:52.271 17:47:38 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2663522 00:04:52.836 [2024-07-24 17:47:38.900994] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
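The killprocess call above ends the scheduler run the same way the json_config_extra_key shutdown near the top of this section does: send SIGINT for an orderly SPDK shutdown, then poll kill -0 until the pid disappears. A sketch of that pattern, with the 30-iteration / 0.5 s budget taken from the json_config/common.sh trace (the forced-kill fallback is an assumption about the error path, not something this log shows):

# Sketch of the SIGINT-then-poll shutdown pattern seen in this log.
shutdown_spdk_app() {
    local pid=$1
    kill -SIGINT "$pid" || return 1
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2> /dev/null || return 0   # process exited cleanly
        sleep 0.5
    done
    kill -9 "$pid"   # assumption: force-kill only if SIGINT was ignored for ~15 s
    return 1
}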
00:04:53.094 00:04:53.094 real 0m1.769s 00:04:53.094 user 0m2.255s 00:04:53.094 sys 0m0.324s 00:04:53.094 17:47:39 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.094 17:47:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.094 ************************************ 00:04:53.094 END TEST event_scheduler 00:04:53.094 ************************************ 00:04:53.094 17:47:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:53.094 17:47:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:53.094 17:47:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.094 17:47:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.094 17:47:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.094 ************************************ 00:04:53.094 START TEST app_repeat 00:04:53.094 ************************************ 00:04:53.094 17:47:39 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2663716 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2663716' 00:04:53.094 Process app_repeat pid: 2663716 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:53.094 spdk_app_start Round 0 00:04:53.094 17:47:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2663716 /var/tmp/spdk-nbd.sock 00:04:53.094 17:47:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2663716 ']' 00:04:53.094 17:47:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.094 17:47:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.094 17:47:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.094 17:47:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.094 17:47:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.094 [2024-07-24 17:47:39.236706] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:04:53.094 [2024-07-24 17:47:39.236780] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663716 ] 00:04:53.094 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.094 [2024-07-24 17:47:39.302641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.352 [2024-07-24 17:47:39.429130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.352 [2024-07-24 17:47:39.429140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.352 17:47:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.352 17:47:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:53.352 17:47:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.610 Malloc0 00:04:53.610 17:47:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.868 Malloc1 00:04:53.868 17:47:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.868 17:47:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.126 /dev/nbd0 00:04:54.126 17:47:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.126 17:47:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.126 17:47:40 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.126 1+0 records in 00:04:54.126 1+0 records out 00:04:54.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173386 s, 23.6 MB/s 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.126 17:47:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.126 17:47:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.126 17:47:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.126 17:47:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.384 /dev/nbd1 00:04:54.384 17:47:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.384 17:47:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.384 1+0 records in 00:04:54.384 1+0 records out 00:04:54.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231207 s, 17.7 MB/s 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.384 17:47:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.384 17:47:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.384 17:47:40 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.384 17:47:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.384 17:47:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.384 17:47:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.670 17:47:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.670 { 00:04:54.670 "nbd_device": "/dev/nbd0", 00:04:54.670 "bdev_name": "Malloc0" 00:04:54.670 }, 00:04:54.670 { 00:04:54.670 "nbd_device": "/dev/nbd1", 00:04:54.670 "bdev_name": "Malloc1" 00:04:54.670 } 00:04:54.670 ]' 00:04:54.670 17:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.670 { 00:04:54.670 "nbd_device": "/dev/nbd0", 00:04:54.670 "bdev_name": "Malloc0" 00:04:54.670 }, 00:04:54.670 { 00:04:54.670 "nbd_device": "/dev/nbd1", 00:04:54.670 "bdev_name": "Malloc1" 00:04:54.670 } 00:04:54.670 ]' 00:04:54.670 17:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.670 17:47:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.670 /dev/nbd1' 00:04:54.670 17:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.670 /dev/nbd1' 00:04:54.670 17:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.927 256+0 records in 00:04:54.927 256+0 records out 00:04:54.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516437 s, 203 MB/s 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.927 256+0 records in 00:04:54.927 256+0 records out 00:04:54.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242423 s, 43.3 MB/s 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.927 256+0 records in 00:04:54.927 256+0 records out 00:04:54.927 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0263933 s, 39.7 MB/s 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.927 17:47:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.185 17:47:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.442 17:47:41 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.442 17:47:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.700 17:47:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.700 17:47:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.958 17:47:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.216 [2024-07-24 17:47:42.371599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.474 [2024-07-24 17:47:42.489249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.474 [2024-07-24 17:47:42.489249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.474 [2024-07-24 17:47:42.552505] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.474 [2024-07-24 17:47:42.552573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.999 17:47:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.999 17:47:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:58.999 spdk_app_start Round 1 00:04:58.999 17:47:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2663716 /var/tmp/spdk-nbd.sock 00:04:58.999 17:47:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2663716 ']' 00:04:58.999 17:47:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.999 17:47:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.999 17:47:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
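Round 0 above runs the full nbd life cycle: wait for each /dev/nbdX to appear, write a random 1 MiB pattern through it, byte-compare the pattern back, then detach and wait for the device to vanish. This is a compact sketch of the three bdev/nbd_common.sh helpers as reconstructed from the xtrace; the sleep back-offs and the /tmp scratch paths are assumptions (the real helpers keep their scratch files under the repo tree, as the dd command lines show).

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # sh@870
            sleep 0.1
        done
        # The device node existing is not enough: poll until a direct-I/O read
        # returns a non-empty block (sh@883-@886).
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            local size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 1 MiB random pattern
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"    # fails loudly on the first differing byte
            done
            rm "$tmp_file"
        fi
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the kernel's view
            sleep 0.1
        done
        return 0
    }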
00:04:58.999 17:47:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.999 17:47:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.256 17:47:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.256 17:47:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:59.256 17:47:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.514 Malloc0 00:04:59.514 17:47:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.772 Malloc1 00:04:59.772 17:47:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.772 17:47:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.030 /dev/nbd0 00:05:00.030 17:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.030 17:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:00.030 1+0 records in 00:05:00.030 1+0 records out 00:05:00.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147437 s, 27.8 MB/s 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:00.030 17:47:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:00.030 17:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.030 17:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.030 17:47:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.332 /dev/nbd1 00:05:00.332 17:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.332 17:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.332 1+0 records in 00:05:00.332 1+0 records out 00:05:00.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210901 s, 19.4 MB/s 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:00.332 17:47:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:00.332 17:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.332 17:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.332 17:47:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.333 17:47:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.333 17:47:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:00.590 { 00:05:00.590 "nbd_device": "/dev/nbd0", 00:05:00.590 "bdev_name": "Malloc0" 00:05:00.590 }, 00:05:00.590 { 00:05:00.590 "nbd_device": "/dev/nbd1", 00:05:00.590 "bdev_name": "Malloc1" 00:05:00.590 } 00:05:00.590 ]' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.590 { 00:05:00.590 "nbd_device": "/dev/nbd0", 00:05:00.590 "bdev_name": "Malloc0" 00:05:00.590 }, 00:05:00.590 { 00:05:00.590 "nbd_device": "/dev/nbd1", 00:05:00.590 "bdev_name": "Malloc1" 00:05:00.590 } 00:05:00.590 ]' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.590 /dev/nbd1' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.590 /dev/nbd1' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.590 256+0 records in 00:05:00.590 256+0 records out 00:05:00.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486258 s, 216 MB/s 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.590 256+0 records in 00:05:00.590 256+0 records out 00:05:00.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241144 s, 43.5 MB/s 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.590 256+0 records in 00:05:00.590 256+0 records out 00:05:00.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257394 s, 40.7 MB/s 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.590 17:47:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.591 17:47:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.848 17:47:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.106 17:47:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.363 17:47:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.363 17:47:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.620 17:47:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.877 [2024-07-24 17:47:48.145816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.135 [2024-07-24 17:47:48.263024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.135 [2024-07-24 17:47:48.263029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.135 [2024-07-24 17:47:48.318301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.135 [2024-07-24 17:47:48.318365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.660 17:47:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.660 17:47:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.660 spdk_app_start Round 2 00:05:04.660 17:47:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2663716 /var/tmp/spdk-nbd.sock 00:05:04.660 17:47:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2663716 ']' 00:05:04.660 17:47:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.660 17:47:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.660 17:47:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
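The nbd_get_disks JSON printed in each round is reduced to a plain device count the same way every time; a sketch reconstructed from the nbd_common.sh@63-@66 records, with the rpc.py path shortened for readability.

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # Pull just the device nodes out of the [{nbd_device, bdev_name}, ...] array.
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)   # grep -c exits non-zero on zero matches
    # Two Malloc bdevs were attached, so the count must be 2 while the disks are
    # up, and the same pipeline must yield 0 after nbd_stop_disk.
    [ "$count" -ne 2 ] && return 1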
00:05:04.660 17:47:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.660 17:47:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.917 17:47:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.917 17:47:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.917 17:47:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.175 Malloc0 00:05:05.175 17:47:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.433 Malloc1 00:05:05.433 17:47:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.433 17:47:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.691 /dev/nbd0 00:05:05.691 17:47:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.691 17:47:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:05.691 1+0 records in 00:05:05.691 1+0 records out 00:05:05.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169688 s, 24.1 MB/s 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.691 17:47:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.691 17:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.691 17:47:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.691 17:47:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.949 /dev/nbd1 00:05:05.949 17:47:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.949 17:47:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.949 1+0 records in 00:05:05.949 1+0 records out 00:05:05.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020218 s, 20.3 MB/s 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.949 17:47:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.949 17:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.949 17:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.949 17:47:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.949 17:47:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.949 17:47:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:06.207 { 00:05:06.207 "nbd_device": "/dev/nbd0", 00:05:06.207 "bdev_name": "Malloc0" 00:05:06.207 }, 00:05:06.207 { 00:05:06.207 "nbd_device": "/dev/nbd1", 00:05:06.207 "bdev_name": "Malloc1" 00:05:06.207 } 00:05:06.207 ]' 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.207 { 00:05:06.207 "nbd_device": "/dev/nbd0", 00:05:06.207 "bdev_name": "Malloc0" 00:05:06.207 }, 00:05:06.207 { 00:05:06.207 "nbd_device": "/dev/nbd1", 00:05:06.207 "bdev_name": "Malloc1" 00:05:06.207 } 00:05:06.207 ]' 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.207 /dev/nbd1' 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.207 /dev/nbd1' 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.207 256+0 records in 00:05:06.207 256+0 records out 00:05:06.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505332 s, 208 MB/s 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.207 17:47:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.463 256+0 records in 00:05:06.463 256+0 records out 00:05:06.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232646 s, 45.1 MB/s 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.463 256+0 records in 00:05:06.463 256+0 records out 00:05:06.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253938 s, 41.3 MB/s 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.463 17:47:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.720 17:47:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.978 17:47:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.235 17:47:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.235 17:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.236 17:47:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.236 17:47:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.492 17:47:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.750 [2024-07-24 17:47:53.913991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.007 [2024-07-24 17:47:54.031537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.007 [2024-07-24 17:47:54.031537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.007 [2024-07-24 17:47:54.091152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.007 [2024-07-24 17:47:54.091219] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.531 17:47:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2663716 /var/tmp/spdk-nbd.sock 00:05:10.531 17:47:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2663716 ']' 00:05:10.531 17:47:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.531 17:47:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.531 17:47:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
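Between rounds the test recycles the app over RPC rather than respawning the process, which is also why the "already registered" notify messages appear from Round 1 onward: the process survives, so the bdev notification types registered in Round 0 are still present when the next round registers them again. Those notices are expected, not errors. A sketch of the restart step, with the rpc.py path shortened:

    # Ask the running app to tear down its current spdk_app_start iteration.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3    # let the reactors wind down before polling the socket again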
00:05:10.531 17:47:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.531 17:47:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:10.788 17:47:56 event.app_repeat -- event/event.sh@39 -- # killprocess 2663716 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2663716 ']' 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2663716 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2663716 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2663716' 00:05:10.788 killing process with pid 2663716 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2663716 00:05:10.788 17:47:56 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2663716 00:05:11.046 spdk_app_start is called in Round 0. 00:05:11.046 Shutdown signal received, stop current app iteration 00:05:11.046 Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 reinitialization... 00:05:11.046 spdk_app_start is called in Round 1. 00:05:11.046 Shutdown signal received, stop current app iteration 00:05:11.046 Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 reinitialization... 00:05:11.046 spdk_app_start is called in Round 2. 00:05:11.046 Shutdown signal received, stop current app iteration 00:05:11.046 Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 reinitialization... 00:05:11.046 spdk_app_start is called in Round 3. 
00:05:11.046 Shutdown signal received, stop current app iteration 00:05:11.046 17:47:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.046 17:47:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.046 00:05:11.046 real 0m17.954s 00:05:11.046 user 0m38.793s 00:05:11.046 sys 0m3.235s 00:05:11.046 17:47:57 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.046 17:47:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.046 ************************************ 00:05:11.046 END TEST app_repeat 00:05:11.046 ************************************ 00:05:11.046 17:47:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.046 17:47:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.046 17:47:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.046 17:47:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.046 17:47:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.046 ************************************ 00:05:11.046 START TEST cpu_locks 00:05:11.046 ************************************ 00:05:11.046 17:47:57 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.046 * Looking for test storage... 00:05:11.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:11.046 17:47:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.046 17:47:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.046 17:47:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.046 17:47:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.046 17:47:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.046 17:47:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.046 17:47:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.046 ************************************ 00:05:11.046 START TEST default_locks 00:05:11.046 ************************************ 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2666159 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2666159 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2666159 ']' 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
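The default_locks test that the trace below steps through asserts the lock with lslocks (cpu_locks.sh@22); a minimal sketch, assuming the lock files keep the spdk_cpu_lock naming the grep targets. The "lslocks: write error" seen below is expected noise: grep -q exits on the first match, and lslocks takes a SIGPIPE writing the rest of its listing into the closed pipe.

    locks_exist() {
        local pid=$1
        # The target holds one POSIX lock per claimed core, on files whose
        # names start with spdk_cpu_lock; lslocks lists them for the pid.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist "$spdk_tgt_pid" || { echo "cpu core lock not found"; return 1; }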
00:05:11.046 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.046 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.305 [2024-07-24 17:47:57.344756] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:11.305 [2024-07-24 17:47:57.344848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666159 ] 00:05:11.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.305 [2024-07-24 17:47:57.401127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.305 [2024-07-24 17:47:57.513545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.563 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.563 17:47:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:11.563 17:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2666159 00:05:11.563 17:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2666159 00:05:11.563 17:47:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.820 lslocks: write error 00:05:11.820 17:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2666159 00:05:11.820 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2666159 ']' 00:05:11.820 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2666159 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666159 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666159' 00:05:12.078 killing process with pid 2666159 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2666159 00:05:12.078 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2666159 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2666159 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2666159 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2666159 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2666159 ']' 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2666159) - No such process 00:05:12.336 ERROR: process (pid: 2666159) is no longer running 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.336 00:05:12.336 real 0m1.290s 00:05:12.336 user 0m1.230s 00:05:12.336 sys 0m0.537s 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.336 17:47:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.336 ************************************ 00:05:12.336 END TEST default_locks 00:05:12.336 ************************************ 00:05:12.336 17:47:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.336 17:47:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.336 17:47:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.594 17:47:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.594 ************************************ 00:05:12.594 START TEST default_locks_via_rpc 00:05:12.594 ************************************ 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2666352 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
2666352 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2666352 ']' 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.594 17:47:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.594 [2024-07-24 17:47:58.684000] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:12.594 [2024-07-24 17:47:58.684080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666352 ] 00:05:12.594 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.594 [2024-07-24 17:47:58.740750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.594 [2024-07-24 17:47:58.851184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.852 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.852 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.852 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:12.852 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.852 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2666352 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2666352 00:05:13.110 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 2666352 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2666352 ']' 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2666352 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666352 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666352' 00:05:13.368 killing process with pid 2666352 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2666352 00:05:13.368 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2666352 00:05:13.634 00:05:13.634 real 0m1.240s 00:05:13.634 user 0m1.168s 00:05:13.634 sys 0m0.527s 00:05:13.634 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.634 17:47:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.634 ************************************ 00:05:13.634 END TEST default_locks_via_rpc 00:05:13.634 ************************************ 00:05:13.634 17:47:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:13.634 17:47:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.634 17:47:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.634 17:47:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.956 ************************************ 00:05:13.956 START TEST non_locking_app_on_locked_coremask 00:05:13.956 ************************************ 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2666515 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2666515 /var/tmp/spdk.sock 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2666515 ']' 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
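(The locks_exist check that ran in the tests above is self-contained enough to lift out: a claimed core shows up as a POSIX file lock on /var/tmp/spdk_cpu_lock_NNN held by the target process, which lslocks can report. Re-created directly from the commands visible in this log:

  # Return 0 iff process $1 holds at least one SPDK CPU-core lock.
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 2666352 && echo "core lock held"   # pid taken from the run above
)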
00:05:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:13.956 17:47:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:13.956 [2024-07-24 17:47:59.974087] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:13.956 [2024-07-24 17:47:59.974194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666515 ]
00:05:13.956 EAL: No free 2048 kB hugepages reported on node 1
00:05:13.956 [2024-07-24 17:48:00.037542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.956 [2024-07-24 17:48:00.156655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2666706
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2666706 /var/tmp/spdk2.sock
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2666706 ']'
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:14.888 17:48:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:14.888 [2024-07-24 17:48:00.978893] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:14.888 [2024-07-24 17:48:00.978985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666706 ]
00:05:14.888 EAL: No free 2048 kB hugepages reported on node 1
00:05:15.145 [2024-07-24 17:48:01.066892] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:14.888 [2024-07-24 17:48:01.066934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.145 [2024-07-24 17:48:01.303889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.710 17:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:15.710 17:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:15.710 17:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2666515
00:05:15.711 17:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2666515
00:05:15.711 17:48:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:16.280 lslocks: write error
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2666515
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2666515 ']'
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2666515
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666515
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666515'
killing process with pid 2666515
17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2666515
00:05:16.280 17:48:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2666515
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2666706
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2666706 ']'
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2666706
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2666706
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2666706'
killing process with pid 2666706
17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2666706
00:05:17.217 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2666706
00:05:17.782 
00:05:17.782 real 0m3.964s
00:05:17.782 user 0m4.313s
00:05:17.782 sys 0m1.126s
00:05:17.782 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:17.782 17:48:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:17.782 ************************************
00:05:17.782 END TEST non_locking_app_on_locked_coremask
00:05:17.782 ************************************
00:05:17.782 17:48:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:17.782 17:48:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:17.782 17:48:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:17.782 17:48:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:17.782 ************************************
00:05:17.782 START TEST locking_app_on_unlocked_coremask
00:05:17.782 ************************************
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2667072
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2667072 /var/tmp/spdk.sock
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2667072 ']'
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:17.782 17:48:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:17.782 [2024-07-24 17:48:03.988799] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:17.782 [2024-07-24 17:48:03.988893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667072 ]
00:05:18.040 EAL: No free 2048 kB hugepages reported on node 1
00:05:18.040 [2024-07-24 17:48:04.057548] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
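(non_locking_app_on_locked_coremask, which just finished above, is the whole pattern in two commands: the first target claims core 0's lock, and the second is told not to compete for it. Roughly, with the binary and socket paths exactly as they appear in this log:

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$BIN" -m 0x1 & pid1=$!      # claims /var/tmp/spdk_cpu_lock_000
  "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
  # Both reactors run on core 0, but only $pid1 shows up in lslocks.
)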
00:05:18.040 [2024-07-24 17:48:04.057589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.040 [2024-07-24 17:48:04.175541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2667197
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2667197 /var/tmp/spdk2.sock
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2667197 ']'
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:18.298 17:48:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:18.298 [2024-07-24 17:48:04.522631] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:18.298 [2024-07-24 17:48:04.522724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667197 ]
00:05:18.556 EAL: No free 2048 kB hugepages reported on node 1
00:05:18.813 [2024-07-24 17:48:04.605576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.813 [2024-07-24 17:48:04.847387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.378 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:19.378 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:19.378 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2667197
00:05:19.378 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2667197
00:05:19.378 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:19.943 lslocks: write error
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2667072
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2667072 ']'
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2667072
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667072
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667072'
killing process with pid 2667072
17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2667072
00:05:19.943 17:48:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2667072
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2667197
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2667197 ']'
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2667197
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667197
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667197'
killing process with pid 2667197
17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2667197
00:05:20.877 17:48:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2667197
00:05:21.135 
00:05:21.135 real 0m3.451s
00:05:21.135 user 0m3.572s
00:05:21.135 sys 0m1.110s
00:05:21.135 17:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:21.135 17:48:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.135 ************************************
00:05:21.135 END TEST locking_app_on_unlocked_coremask
00:05:21.135 ************************************
00:05:21.394 17:48:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:21.394 17:48:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:21.394 17:48:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:21.394 17:48:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:21.394 ************************************
00:05:21.394 START TEST locking_app_on_locked_coremask
00:05:21.394 ************************************
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2667664
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2667664 /var/tmp/spdk.sock
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2667664 ']'
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:21.394 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.394 [2024-07-24 17:48:07.484613] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
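(locking_app_on_unlocked_coremask, which just ended, is the mirror image: the first target starts with --disable-cpumask-locks, so the lock file stays free and the second, lock-taking target on the same mask acquires it. One way to ask which pid actually owns a core lock; a sketch only, since lslocks column layout varies with util-linux version:

  for pid in "$pid1" "$pid2"; do   # pids of the two targets above
      lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock_000 \
          && echo "core 0 lock held by $pid"
  done
)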
00:05:21.394 [2024-07-24 17:48:07.484705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667664 ]
00:05:21.394 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.394 [2024-07-24 17:48:07.547753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.653 [2024-07-24 17:48:07.669728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2667884
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2667884 /var/tmp/spdk2.sock
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2667884 /var/tmp/spdk2.sock
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2667884 /var/tmp/spdk2.sock
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2667884 ']'
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:21.911 17:48:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.911 [2024-07-24 17:48:07.986228] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:21.911 [2024-07-24 17:48:07.986315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667884 ]
00:05:21.911 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.911 [2024-07-24 17:48:08.078693] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2667664 has claimed it.
00:05:21.911 [2024-07-24 17:48:08.078744] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:22.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2667884) - No such process
00:05:22.477 ERROR: process (pid: 2667884) is no longer running
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2667664
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2667664
00:05:22.477 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:22.736 lslocks: write error
00:05:22.736 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2667664
00:05:22.736 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2667664 ']'
00:05:22.736 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2667664
00:05:22.736 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:22.736 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:22.736 17:48:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667664
00:05:22.994 17:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:22.994 17:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:22.994 17:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667664'
killing process with pid 2667664
17:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2667664
00:05:22.994 17:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2667664
00:05:23.253 
00:05:23.253 real 0m2.027s
00:05:23.253 user 0m2.197s
00:05:23.253 sys 0m0.653s
00:05:23.253 17:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:23.253 17:48:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.253 ************************************
00:05:23.253 END TEST locking_app_on_locked_coremask
00:05:23.253 ************************************
00:05:23.253 17:48:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:23.253 17:48:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:23.253 17:48:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:23.253 17:48:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.253 ************************************
00:05:23.253 START TEST locking_overlapped_coremask
00:05:23.253 ************************************
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2668310
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2668310 /var/tmp/spdk.sock
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2668310 ']'
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:23.253 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.511 [2024-07-24 17:48:09.556876] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
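(locking_app_on_locked_coremask, summarized just above, leans on the suite's NOT helper: run a command that must fail and invert its exit status, which is where the es=1 bookkeeping in this log comes from. The pattern reduced to its core; the real helper in autotest_common.sh also screens out exit codes above 128:

  # Succeed only if the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1   # unexpected success
      fi
      return 0       # expected failure, e.g. the core was already claimed
  }
  NOT waitforlisten 2667884 /var/tmp/spdk2.sock   # second target must have died
)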
00:05:23.511 [2024-07-24 17:48:09.556968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668310 ]
00:05:23.511 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.511 [2024-07-24 17:48:09.615789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:23.511 [2024-07-24 17:48:09.728253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.511 [2024-07-24 17:48:09.728309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:23.511 [2024-07-24 17:48:09.728312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2668442
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2668442 /var/tmp/spdk2.sock
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2668442 /var/tmp/spdk2.sock
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2668442 /var/tmp/spdk2.sock
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2668442 ']'
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:23.770 17:48:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:24.028 [2024-07-24 17:48:10.042548] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:24.028 [2024-07-24 17:48:10.042639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668442 ]
00:05:24.028 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.028 [2024-07-24 17:48:10.130914] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2668310 has claimed it.
00:05:24.028 [2024-07-24 17:48:10.130982] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:24.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2668442) - No such process
00:05:24.593 ERROR: process (pid: 2668442) is no longer running
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2668310
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2668310 ']'
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2668310
00:05:24.593 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:05:24.594 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:24.594 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2668310
00:05:24.594 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:24.594 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:24.594 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2668310'
killing process with pid 2668310
17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 2668310
00:05:24.594 17:48:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2668310
00:05:25.161 
00:05:25.161 real 0m1.697s
00:05:25.161 user 0m4.486s
00:05:25.161 sys 0m0.458s
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:25.161 ************************************
00:05:25.161 END TEST locking_overlapped_coremask
00:05:25.161 ************************************
00:05:25.161 17:48:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:25.161 17:48:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:25.161 17:48:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.161 17:48:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:25.161 ************************************
00:05:25.161 START TEST locking_overlapped_coremask_via_rpc
00:05:25.161 ************************************
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2668604
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2668604 /var/tmp/spdk.sock
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2668604 ']'
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:25.161 17:48:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:25.161 [2024-07-24 17:48:11.305233] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:25.161 [2024-07-24 17:48:11.305317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668604 ]
00:05:25.161 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.161 [2024-07-24 17:48:11.366521] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:25.161 [2024-07-24 17:48:11.366565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.420 [2024-07-24 17:48:11.484653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.420 [2024-07-24 17:48:11.484704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.420 [2024-07-24 17:48:11.484722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2668744 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2668744 /var/tmp/spdk2.sock 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2668744 ']' 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.985 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.986 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.986 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.986 17:48:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.244 [2024-07-24 17:48:12.276548] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:26.244 [2024-07-24 17:48:12.276630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668744 ] 00:05:26.244 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.244 [2024-07-24 17:48:12.362504] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.244 [2024-07-24 17:48:12.362543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.503 [2024-07-24 17:48:12.587513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.503 [2024-07-24 17:48:12.587578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:26.503 [2024-07-24 17:48:12.587581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.069 [2024-07-24 17:48:13.234205] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2668604 has claimed it. 
00:05:27.069 request: 00:05:27.069 { 00:05:27.069 "method": "framework_enable_cpumask_locks", 00:05:27.069 "req_id": 1 00:05:27.069 } 00:05:27.069 Got JSON-RPC error response 00:05:27.069 response: 00:05:27.069 { 00:05:27.069 "code": -32603, 00:05:27.069 "message": "Failed to claim CPU core: 2" 00:05:27.069 } 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2668604 /var/tmp/spdk.sock 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2668604 ']' 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.069 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2668744 /var/tmp/spdk2.sock 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2668744 ']' 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
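For context on the exchange above: the second target was launched with --disable-cpumask-locks, and the test then re-enables locking over JSON-RPC. The call against /var/tmp/spdk2.sock fails with -32603 because the first target (pid 2668604) still holds the per-core lock for core 2, which the second target's 0x1c mask overlaps. A minimal sketch of the same two calls via SPDK's rpc.py helper (illustrative only, not part of the captured run; assumes an SPDK checkout and the two targets shown above):

    # First target (default socket /var/tmp/spdk.sock) can claim its cores:
    scripts/rpc.py framework_enable_cpumask_locks
    # Second target shares core 2, so its claim is rejected with -32603:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks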
00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.327 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.584 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.585 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.585 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:27.585 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.585 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.585 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.585 00:05:27.585 real 0m2.480s 00:05:27.585 user 0m1.201s 00:05:27.585 sys 0m0.199s 00:05:27.585 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.585 17:48:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.585 ************************************ 00:05:27.585 END TEST locking_overlapped_coremask_via_rpc 00:05:27.585 ************************************ 00:05:27.585 17:48:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:27.585 17:48:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2668604 ]] 00:05:27.585 17:48:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2668604 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2668604 ']' 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2668604 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2668604 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2668604' 00:05:27.585 killing process with pid 2668604 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2668604 00:05:27.585 17:48:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2668604 00:05:28.151 17:48:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2668744 ]] 00:05:28.151 17:48:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2668744 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2668744 ']' 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2668744 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2668744 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2668744' 00:05:28.151 killing process with pid 2668744 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2668744 00:05:28.151 17:48:14 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2668744 00:05:28.718 17:48:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.718 17:48:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:28.718 17:48:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2668604 ]] 00:05:28.718 17:48:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2668604 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2668604 ']' 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2668604 00:05:28.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2668604) - No such process 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2668604 is not found' 00:05:28.718 Process with pid 2668604 is not found 00:05:28.718 17:48:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2668744 ]] 00:05:28.718 17:48:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2668744 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2668744 ']' 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2668744 00:05:28.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2668744) - No such process 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2668744 is not found' 00:05:28.718 Process with pid 2668744 is not found 00:05:28.718 17:48:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.718 00:05:28.718 real 0m17.499s 00:05:28.718 user 0m30.570s 00:05:28.718 sys 0m5.503s 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.718 17:48:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.718 ************************************ 00:05:28.718 END TEST cpu_locks 00:05:28.718 ************************************ 00:05:28.718 00:05:28.718 real 0m41.519s 00:05:28.718 user 1m18.443s 00:05:28.718 sys 0m9.543s 00:05:28.718 17:48:14 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.718 17:48:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.718 ************************************ 00:05:28.718 END TEST event 00:05:28.718 ************************************ 00:05:28.718 17:48:14 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:28.718 17:48:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.718 17:48:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.718 17:48:14 -- common/autotest_common.sh@10 -- # set +x 00:05:28.718 ************************************ 00:05:28.718 START TEST thread 00:05:28.718 ************************************ 00:05:28.718 17:48:14 thread -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:28.718 * Looking for test storage... 00:05:28.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:28.718 17:48:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.718 17:48:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:28.718 17:48:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.718 17:48:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.718 ************************************ 00:05:28.718 START TEST thread_poller_perf 00:05:28.718 ************************************ 00:05:28.718 17:48:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.718 [2024-07-24 17:48:14.872973] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:28.718 [2024-07-24 17:48:14.873045] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669113 ] 00:05:28.718 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.718 [2024-07-24 17:48:14.936474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.976 [2024-07-24 17:48:15.056223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.976 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:30.349 ====================================== 00:05:30.349 busy:2709291463 (cyc) 00:05:30.349 total_run_count: 291000 00:05:30.349 tsc_hz: 2700000000 (cyc) 00:05:30.349 ====================================== 00:05:30.349 poller_cost: 9310 (cyc), 3448 (nsec) 00:05:30.349 00:05:30.349 real 0m1.324s 00:05:30.349 user 0m1.228s 00:05:30.349 sys 0m0.090s 00:05:30.350 17:48:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.350 17:48:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.350 ************************************ 00:05:30.350 END TEST thread_poller_perf 00:05:30.350 ************************************ 00:05:30.350 17:48:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.350 17:48:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:30.350 17:48:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.350 17:48:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.350 ************************************ 00:05:30.350 START TEST thread_poller_perf 00:05:30.350 ************************************ 00:05:30.350 17:48:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.350 [2024-07-24 17:48:16.244701] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
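The ====== summary blocks in this suite can be read back from their own numbers: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure follows from the reported tsc_hz. For the 1-microsecond-period run above: 2709291463 / 291000 ≈ 9310 cyc, and 9310 / 2.7 cyc per nsec ≈ 3448 nsec. A quick recomputation in shell (illustrative only; the bc calls are not part of the captured run):

    # poller_cost in cycles, then converted at the reported tsc_hz of 2.7 GHz
    echo '2709291463 / 291000' | bc                # 9310 (cyc)
    echo '9310 * 1000000000 / 2700000000' | bc     # 3448 (nsec)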
00:05:30.350 [2024-07-24 17:48:16.244772] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669269 ] 00:05:30.350 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.350 [2024-07-24 17:48:16.307505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.350 [2024-07-24 17:48:16.432260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.350 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:31.330 ====================================== 00:05:31.330 busy:2702983321 (cyc) 00:05:31.330 total_run_count: 3862000 00:05:31.330 tsc_hz: 2700000000 (cyc) 00:05:31.330 ====================================== 00:05:31.330 poller_cost: 699 (cyc), 258 (nsec) 00:05:31.330 00:05:31.330 real 0m1.325s 00:05:31.330 user 0m1.227s 00:05:31.330 sys 0m0.092s 00:05:31.330 17:48:17 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.330 17:48:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.330 ************************************ 00:05:31.330 END TEST thread_poller_perf 00:05:31.330 ************************************ 00:05:31.330 17:48:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:31.330 00:05:31.330 real 0m2.794s 00:05:31.330 user 0m2.513s 00:05:31.330 sys 0m0.281s 00:05:31.330 17:48:17 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.330 17:48:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.330 ************************************ 00:05:31.330 END TEST thread 00:05:31.330 ************************************ 00:05:31.589 17:48:17 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:31.589 17:48:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.589 17:48:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.589 17:48:17 -- common/autotest_common.sh@10 -- # set +x 00:05:31.589 ************************************ 00:05:31.589 START TEST accel 00:05:31.589 ************************************ 00:05:31.589 17:48:17 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:31.589 * Looking for test storage... 
00:05:31.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:31.589 17:48:17 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:31.589 17:48:17 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:31.589 17:48:17 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.589 17:48:17 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2669590 00:05:31.589 17:48:17 accel -- accel/accel.sh@63 -- # waitforlisten 2669590 00:05:31.589 17:48:17 accel -- common/autotest_common.sh@829 -- # '[' -z 2669590 ']' 00:05:31.589 17:48:17 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.589 17:48:17 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:31.589 17:48:17 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.589 17:48:17 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:31.589 17:48:17 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.589 17:48:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.589 17:48:17 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.589 17:48:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.590 17:48:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.590 17:48:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.590 17:48:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.590 17:48:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.590 17:48:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:31.590 17:48:17 accel -- accel/accel.sh@41 -- # jq -r . 00:05:31.590 [2024-07-24 17:48:17.714490] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:31.590 [2024-07-24 17:48:17.714578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669590 ] 00:05:31.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.590 [2024-07-24 17:48:17.774024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.848 [2024-07-24 17:48:17.894021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.414 17:48:18 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.414 17:48:18 accel -- common/autotest_common.sh@862 -- # return 0 00:05:32.414 17:48:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:32.414 17:48:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:32.414 17:48:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:32.414 17:48:18 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:32.414 17:48:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:32.414 17:48:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:32.672 17:48:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.672 17:48:18 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:32.672 17:48:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.672 17:48:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 
17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.672 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.672 17:48:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.672 17:48:18 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.673 17:48:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.673 17:48:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.673 17:48:18 accel -- accel/accel.sh@75 -- # killprocess 2669590 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@948 -- # '[' -z 2669590 ']' 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@952 -- # kill -0 2669590 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@953 -- # uname 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2669590 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2669590' 00:05:32.673 killing process with pid 2669590 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@967 -- # kill 2669590 00:05:32.673 17:48:18 accel -- common/autotest_common.sh@972 -- # wait 2669590 00:05:33.239 17:48:19 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:33.239 17:48:19 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:33.239 17:48:19 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:33.239 17:48:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.239 17:48:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.239 17:48:19 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:33.239 17:48:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:33.239 17:48:19 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.239 17:48:19 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:33.239 17:48:19 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:33.239 17:48:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.239 17:48:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.239 17:48:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.239 ************************************ 00:05:33.239 START TEST accel_missing_filename 00:05:33.239 ************************************ 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.239 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:33.239 17:48:19 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:33.239 [2024-07-24 17:48:19.356662] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:33.239 [2024-07-24 17:48:19.356725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669761 ] 00:05:33.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.239 [2024-07-24 17:48:19.417539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.497 [2024-07-24 17:48:19.540363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.497 [2024-07-24 17:48:19.600941] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.497 [2024-07-24 17:48:19.685407] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:33.755 A filename is required. 
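The 'A filename is required.' failure above is the intended negative case: for compress/decompress workloads accel_perf needs an uncompressed input file via -l, which this run deliberately omits. A passing shape of the command, pointing -l at the same bib test file the suite uses, would be (sketch only, not part of the captured run; adding -y here would instead hit the 'verify not supported for compression' error exercised next):

    # Run from an SPDK checkout; -l names the uncompressed input file
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib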
00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.755 00:05:33.755 real 0m0.472s 00:05:33.755 user 0m0.362s 00:05:33.755 sys 0m0.144s 00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.755 17:48:19 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:33.755 ************************************ 00:05:33.755 END TEST accel_missing_filename 00:05:33.755 ************************************ 00:05:33.755 17:48:19 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.755 17:48:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:33.755 17:48:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.755 17:48:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.755 ************************************ 00:05:33.755 START TEST accel_compress_verify 00:05:33.755 ************************************ 00:05:33.755 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.755 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:33.755 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.755 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.755 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.756 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.756 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.756 17:48:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.756 
17:48:19 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:33.756 17:48:19 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:33.756 [2024-07-24 17:48:19.872094] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:33.756 [2024-07-24 17:48:19.872192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669870 ] 00:05:33.756 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.756 [2024-07-24 17:48:19.934040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.014 [2024-07-24 17:48:20.053931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.014 [2024-07-24 17:48:20.117911] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.014 [2024-07-24 17:48:20.208754] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:34.273 00:05:34.273 Compression does not support the verify option, aborting. 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.273 00:05:34.273 real 0m0.481s 00:05:34.273 user 0m0.367s 00:05:34.273 sys 0m0.148s 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.273 17:48:20 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 ************************************ 00:05:34.273 END TEST accel_compress_verify 00:05:34.273 ************************************ 00:05:34.273 17:48:20 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:34.273 17:48:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.273 17:48:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.273 17:48:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 ************************************ 00:05:34.273 START TEST accel_wrong_workload 00:05:34.273 ************************************ 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 
1 -w foobar 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:34.273 17:48:20 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:34.273 Unsupported workload type: foobar 00:05:34.273 [2024-07-24 17:48:20.399449] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:34.273 accel_perf options: 00:05:34.273 [-h help message] 00:05:34.273 [-q queue depth per core] 00:05:34.273 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:34.273 [-T number of threads per core 00:05:34.273 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:34.273 [-t time in seconds] 00:05:34.273 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:34.273 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:34.273 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:34.273 [-l for compress/decompress workloads, name of uncompressed input file 00:05:34.273 [-S for crc32c workload, use this seed value (default 0) 00:05:34.273 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:34.273 [-f for fill workload, use this BYTE value (default 255) 00:05:34.273 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:34.273 [-y verify result if this switch is on] 00:05:34.273 [-a tasks to allocate per core (default: same value as -q)] 00:05:34.273 Can be used to spread operations across a wider range of memory. 
00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.273 00:05:34.273 real 0m0.024s 00:05:34.273 user 0m0.018s 00:05:34.273 sys 0m0.007s 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.273 17:48:20 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 ************************************ 00:05:34.273 END TEST accel_wrong_workload 00:05:34.273 ************************************ 00:05:34.273 Error: writing output failed: Broken pipe 00:05:34.273 17:48:20 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:34.273 17:48:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:34.273 17:48:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.273 17:48:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 ************************************ 00:05:34.273 START TEST accel_negative_buffers 00:05:34.273 ************************************ 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.273 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:34.273 17:48:20 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:34.273 -x option must be non-negative. 
00:05:34.274 [2024-07-24 17:48:20.465333] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:34.274 accel_perf options: 00:05:34.274 [-h help message] 00:05:34.274 [-q queue depth per core] 00:05:34.274 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:34.274 [-T number of threads per core 00:05:34.274 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:34.274 [-t time in seconds] 00:05:34.274 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:34.274 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:34.274 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:34.274 [-l for compress/decompress workloads, name of uncompressed input file 00:05:34.274 [-S for crc32c workload, use this seed value (default 0) 00:05:34.274 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:34.274 [-f for fill workload, use this BYTE value (default 255) 00:05:34.274 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:34.274 [-y verify result if this switch is on] 00:05:34.274 [-a tasks to allocate per core (default: same value as -q)] 00:05:34.274 Can be used to spread operations across a wider range of memory. 00:05:34.274 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:34.274 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.274 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.274 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.274 00:05:34.274 real 0m0.022s 00:05:34.274 user 0m0.013s 00:05:34.274 sys 0m0.009s 00:05:34.274 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.274 17:48:20 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:34.274 ************************************ 00:05:34.274 END TEST accel_negative_buffers 00:05:34.274 ************************************ 00:05:34.274 Error: writing output failed: Broken pipe 00:05:34.274 17:48:20 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:34.274 17:48:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:34.274 17:48:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.274 17:48:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.274 ************************************ 00:05:34.274 START TEST accel_crc32c 00:05:34.274 ************************************ 00:05:34.274 17:48:20 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:34.274 17:48:20 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:34.274 [2024-07-24 17:48:20.524002] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:34.274 [2024-07-24 17:48:20.524069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669974 ] 00:05:34.532 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.532 [2024-07-24 17:48:20.586924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.532 [2024-07-24 17:48:20.709629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.532 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.532 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.532 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.532 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.532 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 
17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.533 17:48:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.906 17:48:21 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:35.906 17:48:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.906 00:05:35.906 real 0m1.478s 00:05:35.906 user 0m1.343s 00:05:35.906 sys 0m0.144s 00:05:35.906 17:48:21 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.906 17:48:21 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:35.906 ************************************ 00:05:35.906 END TEST accel_crc32c 00:05:35.906 ************************************ 00:05:35.906 17:48:22 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:35.906 17:48:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:35.906 17:48:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.906 17:48:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.906 ************************************ 00:05:35.906 START TEST accel_crc32c_C2 00:05:35.906 ************************************ 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:35.906 17:48:22 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:35.906 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:35.906 [2024-07-24 17:48:22.049986] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:35.906 [2024-07-24 17:48:22.050049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670136 ] 00:05:35.906 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.906 [2024-07-24 17:48:22.114155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.164 [2024-07-24 17:48:22.233211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 
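The startup sequence above repeats for every workload in this suite: build_accel_config assembles an accel JSON configuration, accel_perf reads it from /dev/fd/62, and the wrapper then parses the tool's output. A minimal sketch of that invocation pattern, assuming a built SPDK tree at the workspace path shown in the trace (the JSON document shape is an illustrative guess; only the fd-62 plumbing and the flags are taken from the log):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  accel_json_cfg=()   # module config snippets; left empty here, so the software path is exercised
  build_accel_config() {
    local IFS=,
    # hypothetical layout -- the real accel.sh emits whatever the loaded modules require
    echo "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" | jq -r .
  }
  "$spdk/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w crc32c -y -C 2 62< <(build_accel_config)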
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.164 17:48:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.537 00:05:37.537 real 0m1.473s 00:05:37.537 user 0m1.333s 00:05:37.537 sys 0m0.148s 00:05:37.537 17:48:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.538 17:48:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:37.538 ************************************ 00:05:37.538 END TEST accel_crc32c_C2 00:05:37.538 ************************************ 00:05:37.538 17:48:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:37.538 17:48:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:37.538 17:48:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.538 17:48:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.538 ************************************ 00:05:37.538 START TEST accel_copy 00:05:37.538 ************************************ 00:05:37.538 17:48:23 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # 
read -r var val 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:37.538 17:48:23 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:37.538 [2024-07-24 17:48:23.570321] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:37.538 [2024-07-24 17:48:23.570383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670409 ] 00:05:37.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.538 [2024-07-24 17:48:23.631493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.538 [2024-07-24 17:48:23.753590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.795 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- 
# case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.796 17:48:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.169 17:48:25 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.169 17:48:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:39.170 17:48:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.170 00:05:39.170 real 0m1.489s 00:05:39.170 user 0m1.350s 00:05:39.170 sys 0m0.147s 00:05:39.170 17:48:25 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.170 17:48:25 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:39.170 ************************************ 00:05:39.170 END TEST accel_copy 00:05:39.170 ************************************ 00:05:39.170 17:48:25 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.170 17:48:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:39.170 17:48:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.170 17:48:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.170 ************************************ 00:05:39.170 START TEST accel_fill 00:05:39.170 ************************************ 00:05:39.170 17:48:25 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@34 -- 
# [[ 0 -gt 0 ]] 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:39.170 [2024-07-24 17:48:25.104985] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:39.170 [2024-07-24 17:48:25.105047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670566 ] 00:05:39.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.170 [2024-07-24 17:48:25.166293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.170 [2024-07-24 17:48:25.287595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 
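For the fill workload the wrapper adds three extra flags, and the parsed values above show where they land: -f 128 surfaces as the fill pattern val=0x80, and the two 64s replace the 32s seen in the earlier runs. A sketch reproducing just this run; the meanings of -q and -a are inferred, not documented anywhere in this log:

  # -f fill byte (decimal 128 == the 0x80 parsed above); -q/-a presumably queue depth and alignment
  "$spdk/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 62< <(build_accel_config)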
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.170 17:48:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.543 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:40.544 17:48:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.544 00:05:40.544 real 0m1.482s 00:05:40.544 user 0m1.344s 00:05:40.544 sys 0m0.145s 00:05:40.544 17:48:26 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.544 17:48:26 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:40.544 ************************************ 00:05:40.544 END TEST accel_fill 00:05:40.544 ************************************ 00:05:40.544 17:48:26 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:40.544 17:48:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.544 17:48:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.544 17:48:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.544 ************************************ 00:05:40.544 START TEST accel_copy_crc32c 00:05:40.544 ************************************ 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.544 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:40.544 17:48:26 
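copy_crc32c pairs a 4096-byte copy with a CRC-32C over the same data, which is why two '4096 bytes' values are parsed in the run that follows; the -C 2 variant later in this log shows the second region growing to '8192 bytes', consistent with two chained source buffers. A sketch that runs both variants back to back and keeps only the trailing summary lines:

  for chain in "" "-C 2"; do
    # $chain is deliberately unquoted so "-C 2" splits into two arguments
    "$spdk/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w copy_crc32c -y $chain \
      62< <(build_accel_config) | tail -n 5
  done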
accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:40.544 [2024-07-24 17:48:26.634021] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:40.544 [2024-07-24 17:48:26.634083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670736 ] 00:05:40.544 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.544 [2024-07-24 17:48:26.697292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.544 [2024-07-24 17:48:26.811766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.802 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.803 17:48:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@21 
-- # case "$var" in 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.176 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.177 00:05:42.177 real 0m1.465s 00:05:42.177 user 0m1.320s 00:05:42.177 sys 0m0.147s 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.177 17:48:28 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 ************************************ 00:05:42.177 END TEST accel_copy_crc32c 00:05:42.177 ************************************ 00:05:42.177 17:48:28 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.177 17:48:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:42.177 17:48:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.177 17:48:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 ************************************ 00:05:42.177 START TEST accel_copy_crc32c_C2 00:05:42.177 ************************************ 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c 
-y -C 2 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:42.177 [2024-07-24 17:48:28.141002] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:42.177 [2024-07-24 17:48:28.141064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671001 ] 00:05:42.177 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.177 [2024-07-24 17:48:28.203274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.177 [2024-07-24 17:48:28.324680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.177 17:48:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.551 00:05:43.551 real 0m1.489s 00:05:43.551 user 0m1.337s 00:05:43.551 sys 0m0.154s 00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:43.551 17:48:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:43.551 ************************************ 00:05:43.551 END TEST accel_copy_crc32c_C2 00:05:43.551 ************************************ 00:05:43.551 17:48:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:43.551 17:48:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:43.551 17:48:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.551 17:48:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.551 ************************************ 00:05:43.551 START TEST accel_dualcast 00:05:43.551 ************************************ 00:05:43.551 17:48:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:43.551 17:48:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:43.551 [2024-07-24 17:48:29.677494] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
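dualcast, as the name suggests, writes a single source buffer to two destinations in one operation, so only one '4096 bytes' value appears in the parse below. To reproduce this run in isolation (same hedged harness as the sketches above):

  # single 4096-byte source, two destination buffers per op
  "$spdk/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dualcast -y 62< <(build_accel_config)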
00:05:43.551 [2024-07-24 17:48:29.677558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671154 ]
00:05:43.551 EAL: No free 2048 kB hugepages reported on node 1
00:05:43.551 [2024-07-24 17:48:29.739366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:43.809 [2024-07-24 17:48:29.861563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.809 17:48:29 accel.accel_dualcast -- accel/accel.sh@20 -- # [per-variable config xtrace condensed] val=0x1, val=dualcast (accel_opc=dualcast), val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes
00:05:45.183 17:48:31 accel.accel_dualcast -- accel/accel.sh@20 -- # [trailing empty val= reads after the run, condensed]
00:05:45.183 17:48:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:45.183 17:48:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:45.183 17:48:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:45.183 real 0m1.481s
00:05:45.183 user 0m1.331s
00:05:45.183 sys 0m0.151s
00:05:45.183 17:48:31 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:45.183 17:48:31 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:05:45.183 ************************************
00:05:45.183 END TEST accel_dualcast
00:05:45.183 ************************************
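Every accel test in this stretch drives SPDK's bundled accel_perf example against the software accel module; the harness feeds it a JSON config over -c /dev/fd/62, as the accel.sh@12 invocation lines show. A minimal standalone reproduction of a pass like the dualcast run above would be the sketch below, which uses only flags visible in this log and omits the fd-based config:

    # 1-second software dualcast run with result verification (-y);
    # path is the SPDK build tree used by this job
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w dualcast -y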
00:05:45.183 17:48:31 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:45.183 17:48:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:45.183 17:48:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:45.183 17:48:31 accel -- common/autotest_common.sh@10 -- # set +x
00:05:45.183 ************************************
00:05:45.183 START TEST accel_compare
00:05:45.183 ************************************
00:05:45.183 17:48:31 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:45.183 17:48:31 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config [config scaffolding xtrace condensed]
00:05:45.183 [2024-07-24 17:48:31.205286] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:45.183 [2024-07-24 17:48:31.205347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671339 ]
00:05:45.183 EAL: No free 2048 kB hugepages reported on node 1
00:05:45.183 [2024-07-24 17:48:31.270434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.183 [2024-07-24 17:48:31.391324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.441 17:48:31 accel.accel_compare -- accel/accel.sh@20 -- # [per-variable config xtrace condensed] val=0x1, val=compare (accel_opc=compare), val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes
00:05:46.813 17:48:32 accel.accel_compare -- accel/accel.sh@20 -- # [trailing empty val= reads after the run, condensed]
00:05:46.813 17:48:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:46.813 17:48:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:46.813 17:48:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:46.813 real 0m1.477s
00:05:46.813 user 0m1.334s
00:05:46.813 sys 0m0.146s
00:05:46.814 17:48:32 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:46.814 17:48:32 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:05:46.814 ************************************
00:05:46.814 END TEST accel_compare
00:05:46.814 ************************************
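The START/END banners and the real/user/sys triples that bracket every test here are produced by the run_test helper from autotest_common.sh, which the trace shows wrapping each accel_test call. A rough, hypothetical sketch of that banner-and-timing pattern (the actual helper does considerably more bookkeeping):

    # hypothetical minimal run_test(): print banners and time the wrapped command
    run_test() {
            local test_name=$1
            shift
            echo '************************************'
            echo "START TEST $test_name"
            echo '************************************'
            time "$@"
            local rc=$?
            echo '************************************'
            echo "END TEST $test_name"
            echo '************************************'
            return $rc
    }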
00:05:46.814 17:48:32 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:46.814 17:48:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:46.814 17:48:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:46.814 17:48:32 accel -- common/autotest_common.sh@10 -- # set +x
00:05:46.814 ************************************
00:05:46.814 START TEST accel_xor
00:05:46.814 ************************************
00:05:46.814 17:48:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:05:46.814 17:48:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config [config scaffolding xtrace condensed]
00:05:46.814 [2024-07-24 17:48:32.725640] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:46.814 [2024-07-24 17:48:32.725705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671589 ]
00:05:46.814 EAL: No free 2048 kB hugepages reported on node 1
00:05:46.814 [2024-07-24 17:48:32.787007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.814 [2024-07-24 17:48:32.913247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.814 17:48:32 accel.accel_xor -- accel/accel.sh@20 -- # [per-variable config xtrace condensed] val=0x1, val=xor (accel_opc=xor), val=2, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes
00:05:48.188 17:48:34 accel.accel_xor -- accel/accel.sh@20 -- # [trailing empty val= reads after the run, condensed]
00:05:48.188 17:48:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:48.188 17:48:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:48.188 17:48:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:48.188 real 0m1.497s
00:05:48.188 user 0m1.349s
00:05:48.188 sys 0m0.150s
00:05:48.189 17:48:34 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:48.189 17:48:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:48.189 ************************************
00:05:48.189 END TEST accel_xor
00:05:48.189 ************************************
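The second accel_xor pass that follows repeats the workload with three source buffers instead of two: the only change in its parameter trace is val=3 where the run above read val=2, matching the extra -x 3 argument (so -x evidently selects the xor source-buffer count). Side by side, as a sketch:

    ./build/examples/accel_perf -t 1 -w xor -y        # run above: 2 xor sources
    ./build/examples/accel_perf -t 1 -w xor -y -x 3   # run below: 3 xor sources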
00:05:48.189 17:48:34 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:05:48.189 17:48:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:48.189 17:48:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:48.189 17:48:34 accel -- common/autotest_common.sh@10 -- # set +x
00:05:48.189 ************************************
00:05:48.189 START TEST accel_xor
00:05:48.189 ************************************
00:05:48.189 17:48:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:05:48.189 17:48:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config [config scaffolding xtrace condensed]
00:05:48.189 [2024-07-24 17:48:34.265418] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:48.189 [2024-07-24 17:48:34.265482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671746 ]
00:05:48.189 EAL: No free 2048 kB hugepages reported on node 1
00:05:48.189 [2024-07-24 17:48:34.326274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.189 [2024-07-24 17:48:34.451078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.447 17:48:34 accel.accel_xor -- accel/accel.sh@20 -- # [per-variable config xtrace condensed] val=0x1, val=xor (accel_opc=xor), val=3, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes
00:05:49.820 17:48:35 accel.accel_xor -- accel/accel.sh@20 -- # [trailing empty val= reads after the run, condensed]
00:05:49.820 17:48:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:49.820 17:48:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:49.820 17:48:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:49.820 real 0m1.484s
00:05:49.820 user 0m1.338s
00:05:49.820 sys 0m0.149s
00:05:49.820 17:48:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:49.820 17:48:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:49.820 ************************************
00:05:49.820 END TEST accel_xor
00:05:49.820 ************************************
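The dif_verify trace below reads two 4096-byte sizes followed by '512 bytes' and '8 bytes'. That combination matches T10 DIF framing — 512-byte logical blocks, each carrying 8 bytes of protection information, eight blocks per 4096-byte buffer — though the log itself never labels the fields, so the mapping is an interpretation:

    # plausible reading of the dif_verify trace values (not stated in the log):
    #   '4096 bytes' (x2) -> data buffer sizes, 8 x 512-byte blocks each
    #   '512 bytes'       -> logical block size
    #   '8 bytes'         -> per-block protection information (DIF)
    ./build/examples/accel_perf -t 1 -w dif_verify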
00:05:49.820 17:48:35 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:49.820 17:48:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:49.820 17:48:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:49.820 17:48:35 accel -- common/autotest_common.sh@10 -- # set +x
00:05:49.820 ************************************
00:05:49.820 START TEST accel_dif_verify
00:05:49.820 ************************************
00:05:49.820 17:48:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:49.820 17:48:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config [config scaffolding xtrace condensed]
00:05:49.820 [2024-07-24 17:48:35.796985] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:49.820 [2024-07-24 17:48:35.797051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671982 ]
00:05:49.820 EAL: No free 2048 kB hugepages reported on node 1
00:05:49.820 [2024-07-24 17:48:35.858585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.820 [2024-07-24 17:48:35.980637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.821 17:48:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # [per-variable config xtrace condensed] val=0x1, val=dif_verify (accel_opc=dif_verify), val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=No
00:05:51.249 17:48:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # [trailing empty val= reads after the run, condensed]
00:05:51.249 17:48:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:51.250 17:48:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:51.250 17:48:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:51.250 real 0m1.476s
00:05:51.250 user 0m1.337s
00:05:51.250 sys 0m0.144s
00:05:51.250 17:48:37 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:51.250 17:48:37 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:05:51.250 ************************************
00:05:51.250 END TEST accel_dif_verify
00:05:51.250 ************************************
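dif_generate, which starts next, is the producer side of the path dif_verify just exercised: the accel framework computes and attaches protection information instead of checking it. Note that both dif workloads read val=No where the earlier workloads read val=Yes — the harness runs them without the -y self-verify flag:

    # the two sides of the DIF path as driven by this job (sketch)
    ./build/examples/accel_perf -t 1 -w dif_generate   # generate protection info
    ./build/examples/accel_perf -t 1 -w dif_verify     # verify protection info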
-w dif_generate 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:51.250 17:48:37 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:51.250 [2024-07-24 17:48:37.321908] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:51.250 [2024-07-24 17:48:37.321971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672181 ] 00:05:51.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.250 [2024-07-24 17:48:37.386118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.510 [2024-07-24 17:48:37.509880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.510 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.511 17:48:37 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.511 17:48:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:52.882 17:48:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.882 00:05:52.882 real 0m1.498s 
00:05:52.882 user 0m1.347s
00:05:52.882 sys 0m0.155s
00:05:52.882 17:48:38 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:52.882 17:48:38 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:52.882 ************************************
00:05:52.882 END TEST accel_dif_generate
00:05:52.882 ************************************
00:05:52.882 17:48:38 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:52.882 17:48:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:52.882 17:48:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.882 17:48:38 accel -- common/autotest_common.sh@10 -- # set +x
00:05:52.882 ************************************
00:05:52.882 START TEST accel_dif_generate_copy
00:05:52.882 ************************************
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:05:52.882 17:48:38 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 17:48:38.864056] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:52.882 [2024-07-24 17:48:38.864132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672336 ] 00:05:52.882 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.882 [2024-07-24 17:48:38.927490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.882 [2024-07-24 17:48:39.050924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:52.882 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.883 17:48:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.255 00:05:54.255 real 0m1.483s 00:05:54.255 user 0m1.340s 00:05:54.255 sys 0m0.145s 00:05:54.255 17:48:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.256 17:48:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:54.256 ************************************ 00:05:54.256 END TEST accel_dif_generate_copy 00:05:54.256 ************************************ 00:05:54.256 17:48:40 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:54.256 17:48:40 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.256 17:48:40 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:54.256 17:48:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.256 17:48:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.256 ************************************ 00:05:54.256 START TEST accel_comp 00:05:54.256 ************************************ 00:05:54.256 17:48:40 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:54.256 17:48:40 accel.accel_comp 
-- accel/accel.sh@17 -- # local accel_module 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:54.256 17:48:40 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:54.256 [2024-07-24 17:48:40.392882] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:54.256 [2024-07-24 17:48:40.392949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672614 ] 00:05:54.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.256 [2024-07-24 17:48:40.454732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.515 [2024-07-24 17:48:40.581193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.515 17:48:40 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- 
# val= 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:54.515 17:48:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:55.888 17:48:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.888 00:05:55.888 real 0m1.502s 00:05:55.888 user 0m1.355s 00:05:55.888 sys 0m0.151s 00:05:55.888 17:48:41 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.888 17:48:41 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:55.888 ************************************ 00:05:55.888 END TEST accel_comp 00:05:55.888 ************************************ 00:05:55.888 17:48:41 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.888 17:48:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:55.888 17:48:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.888 17:48:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.888 ************************************ 00:05:55.888 START TEST accel_decomp 00:05:55.888 
************************************ 00:05:55.888 17:48:41 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:55.888 17:48:41 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:55.888 [2024-07-24 17:48:41.937674] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:55.889 [2024-07-24 17:48:41.937742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672772 ] 00:05:55.889 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.889 [2024-07-24 17:48:41.999924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.889 [2024-07-24 17:48:42.124470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 
17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.147 17:48:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.521 17:48:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.521 00:05:57.521 real 0m1.500s 00:05:57.521 user 0m1.353s 00:05:57.521 sys 0m0.150s 00:05:57.521 17:48:43 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.521 17:48:43 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:57.521 ************************************ 00:05:57.521 END TEST 
accel_decomp
00:05:57.521 ************************************
00:05:57.521 17:48:43 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:57.521 17:48:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:57.521 17:48:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:57.521 17:48:43 accel -- common/autotest_common.sh@10 -- # set +x
00:05:57.521 ************************************
00:05:57.521 START TEST accel_decomp_full
00:05:57.521 ************************************
00:05:57.521 17:48:43 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 17:48:43.480019] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:05:57.521 [2024-07-24 17:48:43.480085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672929 ] 00:05:57.521 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.521 [2024-07-24 17:48:43.545508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.521 [2024-07-24 17:48:43.671092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.521 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:57.522 17:48:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.895 17:48:44 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.895 17:48:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.895 00:05:58.895 real 0m1.504s 00:05:58.895 user 0m1.353s 00:05:58.895 sys 0m0.153s 00:05:58.895 17:48:44 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.895 17:48:44 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 ************************************ 00:05:58.895 END TEST accel_decomp_full 00:05:58.895 ************************************ 00:05:58.895 17:48:44 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.895 17:48:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:58.895 17:48:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.895 17:48:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 ************************************ 00:05:58.895 START TEST accel_decomp_mcore 00:05:58.895 ************************************ 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:58.895 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:58.895 [2024-07-24 17:48:45.030056] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:05:58.895 [2024-07-24 17:48:45.030130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673208 ] 00:05:58.895 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.895 [2024-07-24 17:48:45.092353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.154 [2024-07-24 17:48:45.219434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.154 [2024-07-24 17:48:45.219484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.154 [2024-07-24 17:48:45.219536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.154 [2024-07-24 17:48:45.219539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- 
# val= 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.154 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var 
val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.155 17:48:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.528 00:06:00.528 real 0m1.488s 00:06:00.528 user 0m4.785s 00:06:00.528 sys 0m0.145s 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.528 17:48:46 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:00.528 ************************************ 00:06:00.528 END TEST accel_decomp_mcore 00:06:00.528 ************************************ 00:06:00.528 17:48:46 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.529 17:48:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:00.529 17:48:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.529 17:48:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.529 ************************************ 00:06:00.529 START TEST accel_decomp_full_mcore 00:06:00.529 ************************************ 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.529 17:48:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:00.529 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:00.529 [2024-07-24 17:48:46.568025] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:06:00.529 [2024-07-24 17:48:46.568090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673367 ] 00:06:00.529 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.529 [2024-07-24 17:48:46.631023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.529 [2024-07-24 17:48:46.757683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.529 [2024-07-24 17:48:46.757733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.529 [2024-07-24 17:48:46.757787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.529 [2024-07-24 17:48:46.757790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:00.789 17:48:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.789 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.790 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.790 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:00.790 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.790 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.790 17:48:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.164 00:06:02.164 real 0m1.523s 00:06:02.164 user 0m4.900s 00:06:02.164 sys 0m0.154s 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.164 17:48:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:02.164 ************************************ 00:06:02.164 END TEST accel_decomp_full_mcore 00:06:02.164 ************************************ 00:06:02.164 17:48:48 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.164 17:48:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:02.164 17:48:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.164 17:48:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.164 ************************************ 00:06:02.164 START TEST accel_decomp_mthread 00:06:02.164 ************************************ 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
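The -c /dev/fd/62 argument on the accel_perf command lines above is the harness handing the tool an in-memory JSON config: build_accel_config collects fragments in the accel_json_cfg array (empty here, which is why the [[ 0 -gt 0 ]] checks all fall through), joins them with the comma IFS set just above, filters the result through jq -r ., and exposes it on fd 62 instead of writing a temp file. A minimal stand-in for the same trick, using plain bash process substitution rather than SPDK's actual helper (the placeholder config body and relative paths are illustrative, not what the harness emits):

    # <(...) expands to a /dev/fd/NN path backed by the echoed text, so
    # accel_perf reads the in-memory config as if it were a file on disk
    ./build/examples/accel_perf -c <(echo '{"subsystems": []}') \
        -t 1 -w decompress -l test/accel/bib -y -T 2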
00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:02.164 [2024-07-24 17:48:48.140587] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:06:02.164 [2024-07-24 17:48:48.140652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673529 ] 00:06:02.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.164 [2024-07-24 17:48:48.202225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.164 [2024-07-24 17:48:48.327414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 
17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.164 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.165 17:48:48 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.165 17:48:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.538 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.539 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.539 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.539 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.539 17:48:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.539 00:06:03.539 real 0m1.501s 00:06:03.539 user 0m1.353s 00:06:03.539 sys 0m0.150s 00:06:03.539 17:48:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.539 17:48:49 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:03.539 ************************************ 00:06:03.539 END TEST accel_decomp_mthread 00:06:03.539 ************************************ 00:06:03.539 17:48:49 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:03.539 17:48:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 
']' 00:06:03.539 17:48:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.539 17:48:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.539 ************************************ 00:06:03.539 START TEST accel_decomp_full_mthread 00:06:03.539 ************************************ 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:03.539 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:03.539 [2024-07-24 17:48:49.691291] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
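Seen together, the four decompress permutations in this stretch differ only in sizing and parallelism: -m 0xf is the reactor core mask (four reactors came up on cores 0-3 in the mcore runs above, matching the "Total cores available: 4" notice), -T 2 asks for two worker threads instead (val=2 in the mthread traces where the others show val=1), and -o 0 switches from 4096-byte operations to the whole 111250-byte bib file, judging by the '4096 bytes' vs '111250 bytes' values in the traces. Condensed, with $SPDK standing in for the workspace path (the first line is inferred from the mcore traces; the other three appear verbatim in the run_test lines above):

    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf       # mcore: 4 cores, 4096-byte ops
    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf  # full_mcore: whole file per op
    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2         # mthread: 2 threads, one core
    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2    # full_mthread: full buffer, 2 threads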
00:06:03.539 [2024-07-24 17:48:49.691358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673800 ] 00:06:03.539 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.539 [2024-07-24 17:48:49.752192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.798 [2024-07-24 17:48:49.873423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.798 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.799 17:48:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.799 17:48:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.172 00:06:05.172 real 0m1.532s 00:06:05.172 user 0m1.396s 00:06:05.172 sys 0m0.139s 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.172 17:48:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:05.172 ************************************ 00:06:05.172 END 
TEST accel_decomp_full_mthread 00:06:05.172 ************************************ 00:06:05.172 17:48:51 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:05.172 17:48:51 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:05.172 17:48:51 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:05.172 17:48:51 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:05.172 17:48:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.172 17:48:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.172 17:48:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.172 17:48:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.172 17:48:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.172 17:48:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.172 17:48:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.172 17:48:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:05.172 17:48:51 accel -- accel/accel.sh@41 -- # jq -r . 00:06:05.172 ************************************ 00:06:05.172 START TEST accel_dif_functional_tests 00:06:05.172 ************************************ 00:06:05.172 17:48:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:05.172 [2024-07-24 17:48:51.296436] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:06:05.172 [2024-07-24 17:48:51.296506] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673965 ] 00:06:05.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.172 [2024-07-24 17:48:51.361962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.430 [2024-07-24 17:48:51.489380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.430 [2024-07-24 17:48:51.489433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.430 [2024-07-24 17:48:51.489436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.430 00:06:05.430 00:06:05.430 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.430 http://cunit.sourceforge.net/ 00:06:05.430 00:06:05.430 00:06:05.430 Suite: accel_dif 00:06:05.430 Test: verify: DIF generated, GUARD check ...passed 00:06:05.430 Test: verify: DIF generated, APPTAG check ...passed 00:06:05.430 Test: verify: DIF generated, REFTAG check ...passed 00:06:05.430 Test: verify: DIF not generated, GUARD check ...[2024-07-24 17:48:51.593295] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:05.430 passed 00:06:05.430 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 17:48:51.593369] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:05.430 passed 00:06:05.430 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 17:48:51.593406] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:05.430 passed 00:06:05.430 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:05.430 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 17:48:51.593479] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=30, Expected=28, Actual=14 00:06:05.430 passed 00:06:05.430 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:05.430 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:05.430 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:05.430 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 17:48:51.593635] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:05.430 passed 00:06:05.430 Test: verify copy: DIF generated, GUARD check ...passed 00:06:05.430 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:05.430 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:05.430 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 17:48:51.593823] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:05.430 passed 00:06:05.430 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 17:48:51.593866] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:05.430 passed 00:06:05.430 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 17:48:51.593906] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:05.430 passed 00:06:05.430 Test: generate copy: DIF generated, GUARD check ...passed 00:06:05.430 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:05.430 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:05.430 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:05.430 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:05.430 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:05.430 Test: generate copy: iovecs-len validate ...[2024-07-24 17:48:51.594170] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:05.430 passed 00:06:05.430 Test: generate copy: buffer alignment validate ...passed 00:06:05.430 00:06:05.430 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.430 suites 1 1 n/a 0 0 00:06:05.430 tests 26 26 26 0 0 00:06:05.430 asserts 115 115 115 0 n/a 00:06:05.430 00:06:05.430 Elapsed time = 0.005 seconds 00:06:05.689 00:06:05.689 real 0m0.604s 00:06:05.689 user 0m0.902s 00:06:05.689 sys 0m0.188s 00:06:05.689 17:48:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.689 17:48:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:05.689 ************************************ 00:06:05.689 END TEST accel_dif_functional_tests 00:06:05.689 ************************************ 00:06:05.689 00:06:05.689 real 0m34.256s 00:06:05.689 user 0m37.935s 00:06:05.689 sys 0m4.676s 00:06:05.689 17:48:51 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.689 17:48:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.689 ************************************ 00:06:05.690 END TEST accel 00:06:05.690 ************************************ 00:06:05.690 17:48:51 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:05.690 17:48:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.690 17:48:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.690 17:48:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.690 ************************************ 00:06:05.690 START TEST accel_rpc 00:06:05.690 ************************************ 00:06:05.690 17:48:51 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:05.947 * Looking for test storage... 00:06:05.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:05.948 17:48:51 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.948 17:48:51 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2674146 00:06:05.948 17:48:51 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:05.948 17:48:51 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2674146 00:06:05.948 17:48:51 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2674146 ']' 00:06:05.948 17:48:51 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.948 17:48:51 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.948 17:48:51 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.948 17:48:51 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.948 17:48:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.948 [2024-07-24 17:48:52.035075] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
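Stripped of the run_test plumbing, the accel_rpc test starting here is a short RPC conversation with a target launched via --wait-for-rpc, so opcode assignments can be changed before the accel framework initializes. A condensed replay of the commands traced just below, assuming the SPDK repo root as the working directory:

    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init...
    scripts/rpc.py accel_assign_opc -o copy -m software     # ...then reassigned
    scripts/rpc.py framework_start_init
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints: software

The bogus "incorrect" module is assigned first on purpose: the second call overrides it, and the query after init confirms that the last pre-init assignment (software) is what sticks.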
00:06:05.948 [2024-07-24 17:48:52.035190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674146 ] 00:06:05.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.948 [2024-07-24 17:48:52.093841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.948 [2024-07-24 17:48:52.202365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.206 17:48:52 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.206 17:48:52 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.206 17:48:52 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:06.206 17:48:52 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:06.206 17:48:52 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:06.206 17:48:52 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:06.206 17:48:52 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:06.206 17:48:52 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.206 17:48:52 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.206 17:48:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.206 ************************************ 00:06:06.206 START TEST accel_assign_opcode 00:06:06.206 ************************************ 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.206 [2024-07-24 17:48:52.266990] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.206 [2024-07-24 17:48:52.275003] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.206 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.464 software 00:06:06.464 00:06:06.464 real 0m0.315s 00:06:06.464 user 0m0.039s 00:06:06.464 sys 0m0.007s 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.464 17:48:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:06.464 ************************************ 00:06:06.464 END TEST accel_assign_opcode 00:06:06.464 ************************************ 00:06:06.464 17:48:52 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2674146 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2674146 ']' 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2674146 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674146 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674146' 00:06:06.464 killing process with pid 2674146 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@967 -- # kill 2674146 00:06:06.464 17:48:52 accel_rpc -- common/autotest_common.sh@972 -- # wait 2674146 00:06:07.029 00:06:07.029 real 0m1.169s 00:06:07.029 user 0m1.085s 00:06:07.029 sys 0m0.440s 00:06:07.029 17:48:53 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.029 17:48:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.029 ************************************ 00:06:07.029 END TEST accel_rpc 00:06:07.029 ************************************ 00:06:07.029 17:48:53 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:07.029 17:48:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.029 17:48:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.029 17:48:53 -- common/autotest_common.sh@10 -- # set +x 00:06:07.029 ************************************ 00:06:07.029 START TEST app_cmdline 00:06:07.029 ************************************ 00:06:07.029 17:48:53 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:07.029 * Looking for test storage... 
00:06:07.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:07.029 17:48:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:07.029 17:48:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2674357 00:06:07.029 17:48:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:07.029 17:48:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2674357 00:06:07.029 17:48:53 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2674357 ']' 00:06:07.030 17:48:53 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.030 17:48:53 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.030 17:48:53 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.030 17:48:53 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.030 17:48:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.030 [2024-07-24 17:48:53.252729] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:06:07.030 [2024-07-24 17:48:53.252822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674357 ] 00:06:07.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.287 [2024-07-24 17:48:53.310491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.287 [2024-07-24 17:48:53.417289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.545 17:48:53 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.545 17:48:53 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:07.545 17:48:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:07.802 { 00:06:07.802 "version": "SPDK v24.09-pre git sha1 5c0b15eed", 00:06:07.802 "fields": { 00:06:07.802 "major": 24, 00:06:07.802 "minor": 9, 00:06:07.802 "patch": 0, 00:06:07.802 "suffix": "-pre", 00:06:07.802 "commit": "5c0b15eed" 00:06:07.802 } 00:06:07.802 } 00:06:07.802 17:48:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:07.802 17:48:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:07.802 17:48:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:07.802 17:48:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:07.802 17:48:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:07.802 17:48:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:07.802 17:48:53 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.802 17:48:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.802 17:48:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:07.802 17:48:53 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.802 17:48:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:07.802 17:48:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:07.802 17:48:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:07.802 17:48:54 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.060 request: 00:06:08.060 { 00:06:08.060 "method": "env_dpdk_get_mem_stats", 00:06:08.060 "req_id": 1 00:06:08.060 } 00:06:08.060 Got JSON-RPC error response 00:06:08.060 response: 00:06:08.060 { 00:06:08.060 "code": -32601, 00:06:08.060 "message": "Method not found" 00:06:08.060 } 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:08.060 17:48:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2674357 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2674357 ']' 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2674357 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674357 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674357' 00:06:08.060 killing process with pid 2674357 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@967 -- # kill 2674357 00:06:08.060 17:48:54 app_cmdline -- common/autotest_common.sh@972 -- # wait 2674357 00:06:08.625 00:06:08.625 real 0m1.649s 00:06:08.625 user 0m1.997s 00:06:08.625 sys 0m0.477s 00:06:08.625 17:48:54 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
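The app_cmdline suite above boots spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, verifies those are the only two methods advertised, then proves the allowlist is enforced by calling a real-but-disallowed method and expecting JSON-RPC error -32601 (Method not found). A minimal replay against such a target (a sketch, assuming the default /var/tmp/spdk.sock and the workspace paths above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc spdk_get_version                       # allowed: prints the version JSON above
    $rpc rpc_get_methods | jq -r '.[]' | sort   # allowed: exactly the two listed methods
    if $rpc env_dpdk_get_mem_stats; then        # disallowed: expect code -32601, non-zero exit
        echo 'allowlist not enforced' >&2
    fi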
00:06:08.625 17:48:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.625 ************************************ 00:06:08.625 END TEST app_cmdline 00:06:08.625 ************************************ 00:06:08.626 17:48:54 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:08.626 17:48:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.626 17:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.626 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.626 ************************************ 00:06:08.626 START TEST version 00:06:08.626 ************************************ 00:06:08.626 17:48:54 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:08.884 * Looking for test storage... 00:06:08.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:08.884 17:48:54 version -- app/version.sh@17 -- # get_header_version major 00:06:08.884 17:48:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # cut -f2 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.884 17:48:54 version -- app/version.sh@17 -- # major=24 00:06:08.884 17:48:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:08.884 17:48:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # cut -f2 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.884 17:48:54 version -- app/version.sh@18 -- # minor=9 00:06:08.884 17:48:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:08.884 17:48:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # cut -f2 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.884 17:48:54 version -- app/version.sh@19 -- # patch=0 00:06:08.884 17:48:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:08.884 17:48:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # cut -f2 00:06:08.884 17:48:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:08.884 17:48:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:08.884 17:48:54 version -- app/version.sh@22 -- # version=24.9 00:06:08.884 17:48:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:08.884 17:48:54 version -- app/version.sh@28 -- # version=24.9rc0 00:06:08.884 17:48:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:08.884 17:48:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:08.884 17:48:54 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:06:08.884 17:48:54 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:08.884 00:06:08.884 real 0m0.106s 00:06:08.884 user 0m0.047s 00:06:08.884 sys 0m0.080s 00:06:08.884 17:48:54 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.884 17:48:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:08.884 ************************************ 00:06:08.884 END TEST version 00:06:08.884 ************************************ 00:06:08.884 17:48:54 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@198 -- # uname -s 00:06:08.884 17:48:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:08.884 17:48:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:08.884 17:48:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:08.884 17:48:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:08.884 17:48:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.884 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.884 17:48:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:08.884 17:48:54 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:08.884 17:48:54 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:08.884 17:48:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:08.884 17:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.884 17:48:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.884 ************************************ 00:06:08.884 START TEST nvmf_tcp 00:06:08.884 ************************************ 00:06:08.884 17:48:55 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:08.884 * Looking for test storage... 00:06:08.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:08.884 17:48:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:08.884 17:48:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:08.884 17:48:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:08.884 17:48:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:08.884 17:48:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.884 17:48:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.884 ************************************ 00:06:08.884 START TEST nvmf_target_core 00:06:08.884 ************************************ 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:08.884 * Looking for test storage... 
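The version suite just above builds its version string with a grep/cut/tr pipeline over include/spdk/version.h and checks it against the installed Python package. A condensed sketch of that pipeline (version.h separates name and value with a tab, hence cut -f2; the -pre suffix is rendered as rc0, matching the 24.9rc0 comparison logged above):

    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version=$major.$minor
    if (( patch != 0 )); then version=$version.$patch; fi
    if [[ $suffix == -pre ]]; then version=${version}rc0; fi
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $version == "$py_version" ]]      # 24.9rc0 on this build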
00:06:08.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.884 17:48:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.885 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.143 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.143 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.143 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.143 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.144 ************************************ 00:06:09.144 START TEST nvmf_abort 00:06:09.144 ************************************ 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:09.144 * Looking for test storage... 00:06:09.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:09.144 17:48:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:09.144 17:48:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.084 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:11.084 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:11.084 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:11.084 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:11.085 17:48:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:11.085 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:11.085 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.085 17:48:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:11.085 Found net devices under 0000:09:00.0: cvl_0_0 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:11.085 Found net devices under 0000:09:00.1: cvl_0_1 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:11.085 17:48:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:11.085 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:11.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:11.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:06:11.344 00:06:11.344 --- 10.0.0.2 ping statistics --- 00:06:11.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.344 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:11.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:11.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:06:11.344 00:06:11.344 --- 10.0.0.1 ping statistics --- 00:06:11.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.344 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2676404 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2676404 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2676404 ']' 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.344 17:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:11.344 [2024-07-24 17:48:57.503761] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:06:11.344 [2024-07-24 17:48:57.503870] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.344 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.344 [2024-07-24 17:48:57.578553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.603 [2024-07-24 17:48:57.702588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:11.603 [2024-07-24 17:48:57.702648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:11.603 [2024-07-24 17:48:57.702666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:11.603 [2024-07-24 17:48:57.702679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:11.603 [2024-07-24 17:48:57.702691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:11.603 [2024-07-24 17:48:57.702887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.603 [2024-07-24 17:48:57.702936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.603 [2024-07-24 17:48:57.702940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.168 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.168 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:12.168 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.427 [2024-07-24 17:48:58.465339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.427 Malloc0 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.427 Delay0 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.427 [2024-07-24 17:48:58.541475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.427 17:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:12.427 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.427 [2024-07-24 17:48:58.647296] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:14.961 Initializing NVMe Controllers 00:06:14.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:14.961 controller IO queue size 128 less than required 00:06:14.961 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:14.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:14.961 Initialization complete. Launching workers. 
00:06:14.961 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33492 00:06:14.961 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33553, failed to submit 62 00:06:14.961 success 33496, unsuccess 57, failed 0 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:14.961 rmmod nvme_tcp 00:06:14.961 rmmod nvme_fabrics 00:06:14.961 rmmod nvme_keyring 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2676404 ']' 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2676404 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2676404 ']' 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2676404 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2676404 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2676404' 00:06:14.961 killing process with pid 2676404 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2676404 00:06:14.961 17:49:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2676404 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.961 17:49:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:17.498 00:06:17.498 real 0m8.000s 00:06:17.498 user 0m12.644s 00:06:17.498 sys 0m2.586s 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.498 ************************************ 00:06:17.498 END TEST nvmf_abort 00:06:17.498 ************************************ 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:17.498 ************************************ 00:06:17.498 START TEST nvmf_ns_hotplug_stress 00:06:17.498 ************************************ 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:17.498 * Looking for test storage... 
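Condensed, the nvmf_abort run that just finished did three things: wired the test bed (target port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, initiator port cvl_0_1 left in the root namespace as 10.0.0.1, TCP port 4420 opened), provisioned the target over rpc.py, and pointed the abort example at the listener with queue depth 128 so aborts race I/O held up by the delay bdev. The same sequence as a sketch, commands as logged and error handling omitted:

    # test-bed wiring (nvmftestinit)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # target provisioning (rpc.py reaches the namespaced target: unix sockets cross netns)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # workload: 1 second of queued I/O plus aborts; results are summarized above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128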
00:06:17.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.498 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:17.499 17:49:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.399 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:19.400 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.400 17:49:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:19.400 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:19.400 Found net devices under 0000:09:00.0: cvl_0_0 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:19.400 Found net devices under 0000:09:00.1: cvl_0_1 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:19.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:19.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms
00:06:19.400
00:06:19.400 --- 10.0.0.2 ping statistics ---
00:06:19.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:19.400 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:19.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:19.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms
00:06:19.400
00:06:19.400 --- 10.0.0.1 ping statistics ---
00:06:19.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:19.400 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2678763
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2678763
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2678763 ']'
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:19.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
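[Annotation] The nvmf_tcp_init sequence traced above is self-contained enough to replay by hand. A minimal sketch, assuming the same two ice-driven E810 ports (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addressing the log uses; every command below appears verbatim in the trace:

    #!/usr/bin/env bash
    # Sketch of the netns-based NVMe/TCP test-bed set up by nvmftestinit.
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                                  # clear stale addressing
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"                                        # target side gets its own namespace
    ip link set cvl_0_0 netns "$NS"                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator

Isolating one port in a namespace presumably forces traffic across the physical link between the two ports rather than the kernel loopback, which is why both ping directions are verified before nvmf_tgt is launched inside the namespace.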
00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.400 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.400 [2024-07-24 17:49:05.600341] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:06:19.400 [2024-07-24 17:49:05.600432] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.400 [2024-07-24 17:49:05.665322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.658 [2024-07-24 17:49:05.778581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.658 [2024-07-24 17:49:05.778647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.658 [2024-07-24 17:49:05.778661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.658 [2024-07-24 17:49:05.778672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.658 [2024-07-24 17:49:05.778683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.658 [2024-07-24 17:49:05.778769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.658 [2024-07-24 17:49:05.778832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.658 [2024-07-24 17:49:05.778835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:19.658 17:49:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:20.223 [2024-07-24 17:49:06.193259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.223 17:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:20.223 17:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.480 
[2024-07-24 17:49:06.716745] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.480 17:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:20.737 17:49:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:20.995 Malloc0 00:06:20.995 17:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:21.253 Delay0 00:06:21.253 17:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.511 17:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:21.769 NULL1 00:06:21.769 17:49:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:22.026 17:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2679060 00:06:22.026 17:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:22.026 17:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:22.026 17:49:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.414 Read completed with error (sct=0, sc=11) 00:06:23.414 17:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.671 17:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:23.671 17:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:23.671 true 00:06:23.929 17:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:23.929 17:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.494 17:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.751 17:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:24.751 17:49:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:25.008 true 00:06:25.008 17:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:25.008 17:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.266 17:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.523 17:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:25.523 17:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:25.781 true 00:06:25.781 17:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:25.781 17:49:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.038 17:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.296 17:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:26.296 17:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:26.554 true 00:06:26.554 17:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:26.554 17:49:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.927 17:49:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.927 17:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:27.927 17:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:28.185 true 00:06:28.185 17:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:28.185 17:49:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.117 17:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.375 17:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:29.375 17:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:29.632 true 00:06:29.632 17:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:29.632 17:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.897 17:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.198 17:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:30.199 17:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:30.199 true 00:06:30.199 17:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:30.199 17:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.571 17:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.571 17:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:31.571 17:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:31.829 true 00:06:31.829 17:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:31.829 17:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.088 17:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.345 17:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:32.345 17:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:32.603 true 00:06:32.603 17:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:32.603 17:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.861 17:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.118 17:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:33.118 17:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:33.376 true 00:06:33.376 17:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:33.376 17:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.309 17:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.567 17:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:34.567 17:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:34.825 true 00:06:34.825 17:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:34.825 17:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.083 17:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.340 17:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:35.340 17:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:35.598 true 00:06:35.598 17:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:35.598 17:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.856 17:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.113 17:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:36.113 17:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:36.370 true 00:06:36.370 17:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:36.370 17:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.302 17:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.816 17:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:37.816 17:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:37.816 true 00:06:37.816 17:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:37.816 17:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.749 17:49:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.006 17:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:39.006 17:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:39.264 true 00:06:39.264 17:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:39.264 17:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.521 17:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.779 17:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:39.779 17:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:40.036 true 00:06:40.036 17:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:40.036 17:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.971 17:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.971 17:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:40.971 17:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:41.228 true 00:06:41.228 17:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:41.228 17:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.485 17:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.743 17:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:41.743 17:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:42.000 true 00:06:42.000 17:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:42.000 17:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.933 17:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.190 17:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:43.190 17:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:43.448 true 00:06:43.448 17:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:43.448 17:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.706 17:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.964 17:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:43.964 17:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:44.221 true 00:06:44.221 17:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:44.221 17:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.479 17:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.736 17:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:44.736 17:49:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:44.994 true 00:06:44.994 17:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:44.994 17:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.367 17:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.367 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:06:46.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.367 17:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:46.367 17:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:46.624 true 00:06:46.624 17:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:46.624 17:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.559 17:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.843 17:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:47.843 17:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:48.110 true 00:06:48.110 17:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:48.110 17:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.368 17:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.368 17:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:48.368 17:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:48.625 true 00:06:48.625 17:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:48.625 17:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.558 17:49:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.815 17:49:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:49.815 17:49:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:50.073 true 00:06:50.073 17:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:50.073 17:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.331 17:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.589 17:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:50.589 17:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:50.847 true 00:06:50.847 17:49:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060 00:06:50.847 17:49:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.779 17:49:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.037 17:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:52.037 17:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:52.295 Initializing NVMe Controllers 00:06:52.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:52.295 Controller IO queue size 128, less than required. 00:06:52.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:52.295 Controller IO queue size 128, less than required. 00:06:52.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:52.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:52.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:52.295 Initialization complete. Launching workers. 
00:06:52.295 ========================================================
00:06:52.295 Latency(us)
00:06:52.295 Device Information : IOPS MiB/s Average min max
00:06:52.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1446.06 0.71 48163.44 2510.08 1059812.56
00:06:52.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11296.63 5.52 11331.84 3647.97 449288.17
00:06:52.295 ========================================================
00:06:52.295 Total : 12742.69 6.22 15511.54 2510.08 1059812.56
00:06:52.295
00:06:52.295 true
00:06:52.295 17:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2679060
00:06:52.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2679060) - No such process
00:06:52.295 17:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2679060
00:06:52.295 17:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.553 17:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:52.809 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:53.065 null0
00:06:53.065 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:53.322 null1
00:06:53.322 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:53.580 null2
00:06:53.580 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 17:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:53.838 null3 00:06:53.838 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:53.838 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:53.838 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:54.095 null4 00:06:54.095 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.095 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.095 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:54.353 null5 00:06:54.353 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.353 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.353 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:54.610 null6 00:06:54.610 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.610 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.610 17:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:54.868 null7 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
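[Annotation] For orientation: the long stretch before the perf summary (null_size 1001 through 1027) was the single-namespace phase of ns_hotplug_stress.sh, one RPC cycle per iteration while the spdk_nvme_perf job (PERF_PID, started at @40/@42 with -t 30) was running. A condensed sketch of that loop, reconstructed from the xtrace lines at @44-@50; the 2>/dev/null on kill is an assumption added for the sketch:

    # One iteration of the single-namespace hotplug stress, per the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do        # loop until the 30 s perf run exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # hot-remove namespace 1 (Delay0)
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # hot-add it back
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"   # resize the NULL1-backed namespace
    done

The periodic "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the expected side effect: reads racing a hot-removed namespace complete with an error status, which the perf tool rate-limits in its output.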
00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.868 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
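Each worker runs the script's add_remove helper, whose shape can be read off the @14-@17 records above: it binds a namespace ID to a bdev, then attaches that namespace to the subsystem and detaches it again, ten times (the @18 removal records appear further below once the loops advance). A sketch reconstructed from the trace rather than the verbatim ns_hotplug_stress.sh source, using the same rpc.py argument order the log shows:

    # Repeatedly hot-add and hot-remove one namespace on cnode1.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }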
00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
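The interleaved @62-@64 records above show the fan-out: each add_remove call is launched in the background, its PID is appended to the pids array via $!, and the @66 record just below blocks on all eight workers at once. Because the workers run concurrently, their xtrace output interleaves non-deterministically, which is why the add/remove records for different namespace IDs appear shuffled together throughout the rest of this section. A minimal sketch of the pattern, under the same assumptions as the helper above:

    # Launch one worker per namespace/bdev pair, then block on all of them.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"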
00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2683060 2683062 2683064 2683066 2683068 2683070 2683073 2683076 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.869 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.127 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.385 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.386 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.644 17:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.902 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.158 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.415 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.673 17:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.932 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.190 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.447 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.447 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.447 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.447 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.447 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.447 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.447 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.705 17:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.963 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.222 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.479 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.737 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.738 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.738 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.738 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.738 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.738 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.738 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.738 17:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.995 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.996 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.996 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.996 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.996 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.996 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.996 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.996 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.254 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.512 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.770 17:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.028 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.286 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.286 rmmod nvme_tcp 00:07:00.286 rmmod nvme_fabrics 00:07:00.286 rmmod nvme_keyring 00:07:00.544 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.544 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:00.544 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:00.544 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2678763 ']' 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2678763 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2678763 ']' 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2678763 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2678763 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2678763' 00:07:00.545 killing process with pid 2678763 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2678763 00:07:00.545 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2678763 00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:00.804 17:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:02.706 17:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:02.706
00:07:02.706 real 0m45.701s
00:07:02.706 user 3m21.595s
00:07:02.706 sys 0m19.065s
00:07:02.706 17:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:02.706 17:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:02.706 ************************************
00:07:02.706 END TEST nvmf_ns_hotplug_stress
00:07:02.706 ************************************
00:07:02.706 17:49:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:02.706 17:49:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:02.706 17:49:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:02.706 17:49:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:02.964 ************************************
00:07:02.964 START TEST nvmf_delete_subsystem
00:07:02.964 ************************************
00:07:02.964 17:49:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:02.964 * Looking for test storage...
00:07:02.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.964 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.964 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.965 17:49:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
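The per-vendor NIC arrays just declared (e810, x722, mlx) are filled in the entries that follow from a PCI bus cache keyed by vendor:device ID, and with SPDK_TEST_NVMF_NICS=e810 only the Intel E810 family survives into pci_devs. A rough sketch of the net effect, with the pci_bus_cache layout assumed to be prefilled elsewhere in common.sh:

    declare -A pci_bus_cache          # assumed: "vendor:device" -> space-separated PCI addresses
    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x1021"]} ${pci_bus_cache["$mellanox:0x1017"]})
    pci_devs=("${e810[@]}")           # e810 was requested, so only those NICs are considered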
00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:04.867 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:04.867 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:04.867 Found net devices under 0000:09:00.0: cvl_0_0 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:04.867 Found net devices under 0000:09:00.1: cvl_0_1 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.867 17:49:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:04.867 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:05.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:07:05.139 00:07:05.139 --- 10.0.0.2 ping statistics --- 00:07:05.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.139 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:05.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:07:05.139 00:07:05.139 --- 10.0.0.1 ping statistics --- 00:07:05.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.139 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2685873 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2685873 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2685873 ']' 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.139 17:49:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.139 [2024-07-24 17:49:51.247438] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:07:05.139 [2024-07-24 17:49:51.247546] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.139 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.139 [2024-07-24 17:49:51.313596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.444 [2024-07-24 17:49:51.434135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.444 [2024-07-24 17:49:51.434193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.444 [2024-07-24 17:49:51.434220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.444 [2024-07-24 17:49:51.434234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.444 [2024-07-24 17:49:51.434246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.444 [2024-07-24 17:49:51.434311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.444 [2024-07-24 17:49:51.434317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.009 [2024-07-24 17:49:52.215184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:06.009 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.010 [2024-07-24 17:49:52.231484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.010 NULL1 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.010 Delay0 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2685979 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:06.010 17:49:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:06.267 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.267 [2024-07-24 17:49:52.306053] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
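By this point the trace has built the whole fixture: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev wrapped in a Delay0 delay bdev whose large latency knobs keep commands pinned in flight while spdk_nvme_perf (pid 2685979) pushes queue-depth-128 randrw I/O at it. Condensed from the traced commands (full /var/jenkins/... paths shortened):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512      # 1000 MB backing bdev, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # fires below, racing the in-flight I/O

The delay bdev is what makes the race interesting: with large artificial latencies and 128 commands queued, the delete is very likely to land while I/O is still outstanding, which is the error storm that follows.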
00:07:08.182 17:49:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:08.182 17:49:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:08.182 17:49:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:08.182 [repetitive perf output condensed: several hundred interleaved lines of 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' as queued I/O is failed back while the subsystem is deleted]
00:07:08.182 [2024-07-24 17:49:54.414424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8af3e0 is same with the state(6) to be set
00:07:08.183 [2024-07-24 17:49:54.415172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6dac000c00 is same with the state(6) to be set
00:07:09.115 [2024-07-24 17:49:55.365019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0ac0 is same with the state(6) to be set
00:07:09.373 [2024-07-24 17:49:55.413967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8af5c0 is same with the state(6) to be set
00:07:09.373 [2024-07-24 17:49:55.414153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8afc20 is same with the state(6) to be set
00:07:09.373 [2024-07-24 17:49:55.415020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6dac00d660 is same with the state(6) to be set
00:07:09.374 [2024-07-24 17:49:55.415718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6dac00d000 is same with the state(6) to be set
00:07:09.374 Initializing NVMe Controllers
00:07:09.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:09.374 Controller IO queue size 128, less than required.
00:07:09.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:09.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:09.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:09.374 Initialization complete. Launching workers.
00:07:09.374 ========================================================
00:07:09.374 Latency(us)
00:07:09.374 Device Information : IOPS MiB/s Average min max
00:07:09.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.79 0.08 905955.22 712.08 1011941.11
00:07:09.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.83 0.08 934374.13 404.84 2001331.69
00:07:09.374 ========================================================
00:07:09.374 Total : 324.62 0.16 919947.41 404.84 2001331.69
00:07:09.374
00:07:09.374 [2024-07-24 17:49:55.416293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0ac0 (9): Bad file descriptor
00:07:09.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:09.374 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:09.374 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:09.374 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2685979
00:07:09.374 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2685979
00:07:09.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2685979) - No such process
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2685979
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2685979
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2685979
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:09.940 17:49:55
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.940 [2024-07-24 17:49:55.938401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2686441 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441 00:07:09.940 17:49:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.940 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.940 [2024-07-24 17:49:55.996440] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
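The @60/@57/@58 entries in the iterations that follow are delete_subsystem.sh polling the perf process until it exits. A minimal sketch of that bounded-wait idiom, with the command line and bounds taken from the trace (the helper name is illustrative, not the script's own):

```bash
#!/usr/bin/env bash
# Bounded wait on a background process, as traced above.
# kill -0 only probes whether the PID exists; it sends no signal.
wait_for_perf() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2> /dev/null; do
        (( delay++ > 20 )) && return 1   # assumption: give up after ~10s (20 x 0.5s)
        sleep 0.5
    done
    return 0   # process exited
}

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
wait_for_perf $! || echo "spdk_nvme_perf still running after timeout"
```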
00:07:10.198 17:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:10.198 17:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441
00:07:10.198 17:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:10.762 17:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:10.762 17:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441
00:07:10.762 17:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:11.327 17:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:11.327 17:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441
00:07:11.327 17:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:11.892 17:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:11.892 17:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441
00:07:11.892 17:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:12.458 17:49:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:12.458 17:49:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441
00:07:12.458 17:49:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:12.715 17:49:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:12.715 17:49:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441
00:07:12.715 17:49:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:12.973 Initializing NVMe Controllers
00:07:12.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:12.973 Controller IO queue size 128, less than required.
00:07:12.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:12.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:12.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:12.973 Initialization complete. Launching workers.
00:07:12.973 ======================================================== 00:07:12.973 Latency(us) 00:07:12.973 Device Information : IOPS MiB/s Average min max 00:07:12.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004324.96 1000199.86 1041796.97 00:07:12.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003261.20 1000253.71 1010066.70 00:07:12.973 ======================================================== 00:07:12.973 Total : 256.00 0.12 1003793.08 1000199.86 1041796.97 00:07:12.973 00:07:13.230 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:13.230 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2686441 00:07:13.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2686441) - No such process 00:07:13.230 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2686441 00:07:13.230 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:13.231 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:13.231 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.231 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:13.231 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.231 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:13.231 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.231 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.231 rmmod nvme_tcp 00:07:13.489 rmmod nvme_fabrics 00:07:13.489 rmmod nvme_keyring 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2685873 ']' 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2685873 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2685873 ']' 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2685873 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2685873 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2685873' 00:07:13.489 killing process with pid 2685873 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2685873 00:07:13.489 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2685873 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.751 17:49:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.651 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:15.651 00:07:15.651 real 0m12.914s 00:07:15.651 user 0m29.086s 00:07:15.651 sys 0m2.966s 00:07:15.651 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.651 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.651 ************************************ 00:07:15.651 END TEST nvmf_delete_subsystem 00:07:15.651 ************************************ 00:07:15.651 17:50:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:15.651 17:50:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.651 17:50:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.651 17:50:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.909 ************************************ 00:07:15.909 START TEST nvmf_host_management 00:07:15.909 ************************************ 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:15.909 * Looking for test storage... 
00:07:15.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.909 17:50:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.909 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.910 17:50:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.823 
17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:17.823 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:17.823 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:17.823 Found net devices under 0000:09:00.0: cvl_0_0 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:17.823 Found net devices under 0000:09:00.1: cvl_0_1 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.823 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.824 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:17.824 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.824 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.824 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:17.824 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:17.824 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.824 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:07:18.082 00:07:18.082 --- 10.0.0.2 ping statistics --- 00:07:18.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.082 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:07:18.082 00:07:18.082 --- 10.0.0.1 ping statistics --- 00:07:18.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.082 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2688786 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2688786 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2688786 ']' 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.082 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.082 [2024-07-24 17:50:04.252482] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:07:18.082 [2024-07-24 17:50:04.252558] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.082 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.082 [2024-07-24 17:50:04.316285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.339 [2024-07-24 17:50:04.429836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.339 [2024-07-24 17:50:04.429905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.339 [2024-07-24 17:50:04.429919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.339 [2024-07-24 17:50:04.429931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.339 [2024-07-24 17:50:04.429940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.339 [2024-07-24 17:50:04.429997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.339 [2024-07-24 17:50:04.430051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.339 [2024-07-24 17:50:04.430123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.339 [2024-07-24 17:50:04.430127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.339 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.339 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:18.339 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:18.339 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:18.340 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.340 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.340 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.340 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.340 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.340 [2024-07-24 17:50:04.601659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.598 Malloc0 00:07:18.598 [2024-07-24 17:50:04.661849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2688927 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2688927 /var/tmp/bdevperf.sock 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2688927 ']' 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:18.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:18.598 { 00:07:18.598 "params": { 00:07:18.598 "name": "Nvme$subsystem", 00:07:18.598 "trtype": "$TEST_TRANSPORT", 00:07:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.598 "adrfam": "ipv4", 00:07:18.598 "trsvcid": "$NVMF_PORT", 00:07:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.598 "hdgst": ${hdgst:-false}, 00:07:18.598 "ddgst": ${ddgst:-false} 00:07:18.598 }, 00:07:18.598 "method": "bdev_nvme_attach_controller" 00:07:18.598 } 00:07:18.598 EOF 00:07:18.598 )") 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:18.598 17:50:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:18.598 "params": { 00:07:18.598 "name": "Nvme0", 00:07:18.598 "trtype": "tcp", 00:07:18.598 "traddr": "10.0.0.2", 00:07:18.598 "adrfam": "ipv4", 00:07:18.598 "trsvcid": "4420", 00:07:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:18.598 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:18.598 "hdgst": false, 00:07:18.598 "ddgst": false 00:07:18.598 }, 00:07:18.598 "method": "bdev_nvme_attach_controller" 00:07:18.598 }' 00:07:18.598 [2024-07-24 17:50:04.737874] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:07:18.598 [2024-07-24 17:50:04.737967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688927 ] 00:07:18.598 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.598 [2024-07-24 17:50:04.798323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.856 [2024-07-24 17:50:04.911487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.856 Running I/O for 10 seconds... 
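The bdevperf invocation above reads its target configuration from --json /dev/fd/63, which is bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed in the trace, and bdevperf sees them as a regular file. A rough sketch of that wiring, assuming the printed fragment stands in for the whole config (the real helper may wrap it in a fuller SPDK JSON envelope):

```bash
#!/usr/bin/env bash
# The JSON fragment printed in the trace, stored for readability.
config='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'

# Process substitution exposes the config as /dev/fd/<n>, which is
# why the traced command line reads "--json /dev/fd/63".
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 10
```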
00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:07:19.127 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.386 17:50:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.386 [2024-07-24 17:50:05.484605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a82650 is same with the state(6) to be set 00:07:19.386 [2024-07-24 17:50:05.484689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a82650 is same with the state(6) to be set 00:07:19.386 [2024-07-24 17:50:05.484704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a82650 is same with the state(6) to be set 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.386 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:19.386 [2024-07-24 17:50:05.494227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.386 [2024-07-24 17:50:05.494268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.386 [2024-07-24 17:50:05.494287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.386 [2024-07-24 17:50:05.494310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.387 [2024-07-24 17:50:05.494325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.387 [2024-07-24 17:50:05.494339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:19.387 [2024-07-24 17:50:05.494353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.387 [2024-07-24 17:50:05.494365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:19.387 [2024-07-24 17:50:05.494378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221d790 is same with the state(6) to be set
00:07:19.387 [2024-07-24 17:50:05.494455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:19.387 [2024-07-24 17:50:05.494476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical WRITE / ABORTED - SQ DELETION (00/08) record pairs elided: sqid:1 cid:1-63, lba 73856-81792 in 128-block steps, logged between 17:50:05.494503 and 17:50:05.496526 while the submission queue was torn down ...]
00:07:19.388 [2024-07-24 17:50:05.496610] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x262e5a0 was disconnected and freed. reset controller.
00:07:19.388 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:19.388 17:50:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:19.388 [2024-07-24 17:50:05.497735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:19.388 task offset: 73728 on job bdev=Nvme0n1 fails
00:07:19.388
00:07:19.388                                                             Latency(us)
00:07:19.388 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:07:19.388 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:19.388 Job: Nvme0n1 ended in about 0.39 seconds with error
00:07:19.388 Verification LBA range: start 0x0 length 0x400
00:07:19.388 	 Nvme0n1    :       0.39    1479.24      92.45     164.36       0.00   37807.32    2997.67   35146.71
00:07:19.388 ===================================================================================================================
00:07:19.388 Total      :               1479.24      92.45     164.36       0.00   37807.32    2997.67   35146.71
00:07:19.388 [2024-07-24 17:50:05.499614] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:19.388 [2024-07-24 17:50:05.499644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221d790 (9): Bad file descriptor
00:07:19.388 [2024-07-24 17:50:05.547708] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
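The failure sequence above is the point of the host-management test: the host is removed from the subsystem's allow list while bdevperf is mid-verify, every queued WRITE completes as ABORTED - SQ DELETION, and the initiator then resets the controller once the host is re-added. A minimal sketch of that RPC pair, assuming the target from this run is still up on /var/tmp/spdk.sock (both commands appear verbatim in the trace above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Drop the host's access; in-flight I/O on its queue pairs is aborted (SQ DELETION).
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access; the host-side reset/reconnect then succeeds, as logged above.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0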
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2688927
00:07:20.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2688927) - No such process
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:07:20.321 {
00:07:20.321   "params": {
00:07:20.321     "name": "Nvme$subsystem",
00:07:20.321     "trtype": "$TEST_TRANSPORT",
00:07:20.321     "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:20.321     "adrfam": "ipv4",
00:07:20.321     "trsvcid": "$NVMF_PORT",
00:07:20.321     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:20.321     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:20.321     "hdgst": ${hdgst:-false},
00:07:20.321     "ddgst": ${ddgst:-false}
00:07:20.321   },
00:07:20.321   "method": "bdev_nvme_attach_controller"
00:07:20.321 }
00:07:20.321 EOF
00:07:20.321 )")
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:07:20.321 17:50:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:20.321   "params": {
00:07:20.321     "name": "Nvme0",
00:07:20.321     "trtype": "tcp",
00:07:20.321     "traddr": "10.0.0.2",
00:07:20.321     "adrfam": "ipv4",
00:07:20.321     "trsvcid": "4420",
00:07:20.321     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:20.321     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:20.321     "hdgst": false,
00:07:20.321     "ddgst": false
00:07:20.321   },
00:07:20.321   "method": "bdev_nvme_attach_controller"
00:07:20.321 }'
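For reference, bdevperf never talks to the target over RPC here: gen_nvmf_target_json streams the bdev definition above into it over /dev/fd/62. A hedged, standalone equivalent that writes the config to a file instead is sketched below; the "subsystems"/"config" wrapper is an assumption (the log only shows the params fragment), while the params values and the bdevperf flags are taken verbatim from this run:

cat > /tmp/bdevperf-nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf-nvme0.json -q 64 -o 65536 -w verify -t 1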
00:07:20.321 [2024-07-24 17:50:06.545570] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:07:20.321 [2024-07-24 17:50:06.545670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689109 ]
00:07:20.321 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.579 [2024-07-24 17:50:06.605984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.836 [2024-07-24 17:50:06.720444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.836 Running I/O for 1 seconds...
00:07:21.769
00:07:21.769                                                             Latency(us)
00:07:21.769 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:07:21.769 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:21.769 Verification LBA range: start 0x0 length 0x400
00:07:21.769 	 Nvme0n1    :       1.03    1433.20      89.57       0.00       0.00   43980.93    9854.67   37282.70
00:07:21.769 ===================================================================================================================
00:07:21.769 Total      :               1433.20      89.57       0.00       0.00   43980.93    9854.67   37282.70
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:22.027 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:22.027 rmmod nvme_tcp
00:07:22.027 rmmod nvme_fabrics
00:07:22.285 rmmod nvme_keyring
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2688786 ']'
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2688786
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2688786 ']'
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2688786
00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management
-- common/autotest_common.sh@953 -- # uname 00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688786 00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688786' 00:07:22.285 killing process with pid 2688786 00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2688786 00:07:22.285 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2688786 00:07:22.544 [2024-07-24 17:50:08.618733] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.544 17:50:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.447 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:24.447 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:24.447 00:07:24.447 real 0m8.752s 00:07:24.447 user 0m19.617s 00:07:24.447 sys 0m2.741s 00:07:24.447 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.447 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.447 ************************************ 00:07:24.447 END TEST nvmf_host_management 00:07:24.447 ************************************ 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.714 ************************************ 00:07:24.714 START TEST nvmf_lvol 00:07:24.714 ************************************ 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:24.714 * Looking for test storage... 00:07:24.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.714 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
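The NVME_HOSTNQN/NVME_HOSTID pair captured above is what the framework hands to the kernel initiator whenever a test invokes $NVME_CONNECT. A minimal sketch of that path, reusing the target address and subsystem NQN seen earlier in this log; nvme-cli must be installed, and the HOSTID derivation below is an assumption that merely mirrors the captured value (common.sh may compute it differently):

HOSTNQN=$(nvme gen-hostnqn)      # random per-run NQN, as captured into NVME_HOSTNQN above
HOSTID=${HOSTNQN##*:}            # uuid suffix, matching the NVME_HOSTID seen in this log
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"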
00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.715 17:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.622 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:26.623 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:26.623 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:26.623 Found net devices under 0000:09:00.0: cvl_0_0 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:26.623 Found net devices under 0000:09:00.1: cvl_0_1 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.623 17:50:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:26.623 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:26.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:26.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms
00:07:26.881
00:07:26.881 --- 10.0.0.2 ping statistics ---
00:07:26.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:26.881 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:26.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:26.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms
00:07:26.881
00:07:26.881 --- 10.0.0.1 ping statistics ---
00:07:26.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:26.881 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2691308
00:07:26.881 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:07:26.882 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2691308
00:07:26.882 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2691308 ']'
00:07:26.882 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.882 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:26.882 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:26.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:26.882 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:26.882 17:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:26.882 [2024-07-24 17:50:13.003929] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:07:26.882 [2024-07-24 17:50:13.004015] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.882 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.882 [2024-07-24 17:50:13.068352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.140 [2024-07-24 17:50:13.179734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.140 [2024-07-24 17:50:13.179782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.140 [2024-07-24 17:50:13.179796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.140 [2024-07-24 17:50:13.179806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.140 [2024-07-24 17:50:13.179817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.140 [2024-07-24 17:50:13.179873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.140 [2024-07-24 17:50:13.179928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.140 [2024-07-24 17:50:13.179930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.140 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.140 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:27.140 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.140 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.140 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.140 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.140 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:27.397 [2024-07-24 17:50:13.572392] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.397 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:27.656 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:27.656 17:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:27.912 17:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:27.912 17:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:28.168 17:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:28.425 17:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e96d6a0e-5e10-415d-9835-c77e61b1e754 
00:07:28.425 17:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e96d6a0e-5e10-415d-9835-c77e61b1e754 lvol 20 00:07:28.682 17:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=44c32e97-8373-4753-9456-097f18f6622b 00:07:28.682 17:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.938 17:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44c32e97-8373-4753-9456-097f18f6622b 00:07:29.196 17:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:29.453 [2024-07-24 17:50:15.643470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.453 17:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.710 17:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2691643 00:07:29.710 17:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:29.710 17:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:29.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.081 17:50:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 44c32e97-8373-4753-9456-097f18f6622b MY_SNAPSHOT 00:07:31.081 17:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=dc5676dc-541a-47b1-a108-b708d1c6f7fe 00:07:31.081 17:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 44c32e97-8373-4753-9456-097f18f6622b 30 00:07:31.339 17:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone dc5676dc-541a-47b1-a108-b708d1c6f7fe MY_CLONE 00:07:31.596 17:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3037c98c-2147-4be3-ad50-5ccac615ed57 00:07:31.596 17:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3037c98c-2147-4be3-ad50-5ccac615ed57 00:07:32.526 17:50:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2691643 00:07:40.632 Initializing NVMe Controllers 00:07:40.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:40.632 Controller IO queue size 128, less than required. 00:07:40.632 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
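The heart of the lvol test traced above: a 20 MiB volume is exported through subsystem cnode0 over NVMe/TCP, spdk_nvme_perf drives 4 KiB random writes at queue depth 128 for 10 seconds, and while that I/O is in flight the volume is snapshotted, resized from 20 to 30 MiB, the snapshot is cloned, and the clone inflated. In the script's own variables, a sketch of the same sequence:

  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB volume on the lvstore
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &           # background I/O load
  snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                                 # grow the live volume under I/O
  clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                                  # decouple the clone from its snapshot
  wait                                                             # let the 10 s perf run finish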
00:07:40.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:40.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:40.632 Initialization complete. Launching workers. 00:07:40.632 ======================================================== 00:07:40.632 Latency(us) 00:07:40.632 Device Information : IOPS MiB/s Average min max 00:07:40.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10707.50 41.83 11958.70 3423.54 84314.09 00:07:40.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10562.50 41.26 12122.25 1862.53 57474.42 00:07:40.632 ======================================================== 00:07:40.632 Total : 21270.00 83.09 12039.92 1862.53 84314.09 00:07:40.632 00:07:40.632 17:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.632 17:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44c32e97-8373-4753-9456-097f18f6622b 00:07:40.890 17:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e96d6a0e-5e10-415d-9835-c77e61b1e754 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.147 rmmod nvme_tcp 00:07:41.147 rmmod nvme_fabrics 00:07:41.147 rmmod nvme_keyring 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2691308 ']' 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2691308 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2691308 ']' 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2691308 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2691308 00:07:41.147 17:50:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2691308' 00:07:41.147 killing process with pid 2691308 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2691308 00:07:41.147 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2691308 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.406 17:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.942 00:07:43.942 real 0m18.911s 00:07:43.942 user 1m4.099s 00:07:43.942 sys 0m5.784s 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.942 ************************************ 00:07:43.942 END TEST nvmf_lvol 00:07:43.942 ************************************ 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.942 ************************************ 00:07:43.942 START TEST nvmf_lvs_grow 00:07:43.942 ************************************ 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:43.942 * Looking for test storage... 
00:07:43.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.942 17:50:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:43.942 17:50:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.942 17:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:45.844 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:45.844 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.844 
17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:45.844 Found net devices under 0000:09:00.0: cvl_0_0 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:45.844 Found net devices under 0000:09:00.1: cvl_0_1 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.844 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.845 17:50:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:07:45.845 00:07:45.845 --- 10.0.0.2 ping statistics --- 00:07:45.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.845 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:45.845 00:07:45.845 --- 10.0.0.1 ping statistics --- 00:07:45.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.845 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2695001 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2695001 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2695001 ']' 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.845 17:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.845 [2024-07-24 17:50:31.912395] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
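Both suites in this section sit on the same two-port topology, assembled by the nvmf_tcp_init fragment traced above: one e810 port (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened, and both directions are verified with single-packet pings. Distilled to the commands involved:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator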
00:07:45.845 [2024-07-24 17:50:31.912501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.845 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.845 [2024-07-24 17:50:31.984666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.845 [2024-07-24 17:50:32.105954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.845 [2024-07-24 17:50:32.106015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.845 [2024-07-24 17:50:32.106031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.845 [2024-07-24 17:50:32.106045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.845 [2024-07-24 17:50:32.106056] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.845 [2024-07-24 17:50:32.106097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.103 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.103 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:46.103 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.103 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.103 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.103 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.103 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:46.361 [2024-07-24 17:50:32.529250] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.361 ************************************ 00:07:46.361 START TEST lvs_grow_clean 00:07:46.361 ************************************ 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.361 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.619 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:46.619 17:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:46.876 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0fc69843-900e-46f0-8c2a-f1334f373afd 00:07:46.876 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:07:46.876 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:47.134 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:47.134 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:47.134 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0fc69843-900e-46f0-8c2a-f1334f373afd lvol 150 00:07:47.392 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=07f8e9cc-0a3b-4b11-bf52-c14467d4368c 00:07:47.392 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.392 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:47.651 [2024-07-24 17:50:33.843290] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:47.651 [2024-07-24 17:50:33.843383] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:47.651 true 00:07:47.651 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:07:47.651 17:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:47.909 17:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:47.909 17:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.167 17:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 07f8e9cc-0a3b-4b11-bf52-c14467d4368c 00:07:48.488 17:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.758 [2024-07-24 17:50:34.830362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.758 17:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2695326 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2695326 /var/tmp/bdevperf.sock 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2695326 ']' 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:49.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.015 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:49.015 [2024-07-24 17:50:35.179463] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
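The clean-path grow test starting here is easy to state: build an lvstore on a 200 MiB AIO file with 4 MiB clusters (after metadata that leaves 49 data clusters), carve out a 150 MiB volume, double the backing file to 400 MiB and rescan the AIO bdev, then, while the bdevperf randwrite run below is in flight, grow the lvstore into the new space and expect the cluster count to read 99. The file-and-RPC skeleton, with $f standing in for the test's aio_bdev path; sequential order is shown here for clarity even though the actual run issues the grow mid-bdevperf:

  f=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$f"
  $rpc bdev_aio_create "$f" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$f"                    # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev            # ...and let the AIO bdev pick up the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"    # lvstore claims the added clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99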
00:07:49.015 [2024-07-24 17:50:35.179545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695326 ] 00:07:49.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.015 [2024-07-24 17:50:35.243220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.273 [2024-07-24 17:50:35.360179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.273 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.273 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:49.273 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.838 Nvme0n1 00:07:49.838 17:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:50.096 [ 00:07:50.096 { 00:07:50.096 "name": "Nvme0n1", 00:07:50.096 "aliases": [ 00:07:50.096 "07f8e9cc-0a3b-4b11-bf52-c14467d4368c" 00:07:50.096 ], 00:07:50.096 "product_name": "NVMe disk", 00:07:50.096 "block_size": 4096, 00:07:50.096 "num_blocks": 38912, 00:07:50.096 "uuid": "07f8e9cc-0a3b-4b11-bf52-c14467d4368c", 00:07:50.096 "assigned_rate_limits": { 00:07:50.096 "rw_ios_per_sec": 0, 00:07:50.096 "rw_mbytes_per_sec": 0, 00:07:50.096 "r_mbytes_per_sec": 0, 00:07:50.096 "w_mbytes_per_sec": 0 00:07:50.096 }, 00:07:50.096 "claimed": false, 00:07:50.096 "zoned": false, 00:07:50.096 "supported_io_types": { 00:07:50.096 "read": true, 00:07:50.096 "write": true, 00:07:50.096 "unmap": true, 00:07:50.096 "flush": true, 00:07:50.096 "reset": true, 00:07:50.096 "nvme_admin": true, 00:07:50.096 "nvme_io": true, 00:07:50.096 "nvme_io_md": false, 00:07:50.096 "write_zeroes": true, 00:07:50.096 "zcopy": false, 00:07:50.096 "get_zone_info": false, 00:07:50.096 "zone_management": false, 00:07:50.096 "zone_append": false, 00:07:50.096 "compare": true, 00:07:50.096 "compare_and_write": true, 00:07:50.096 "abort": true, 00:07:50.096 "seek_hole": false, 00:07:50.096 "seek_data": false, 00:07:50.096 "copy": true, 00:07:50.096 "nvme_iov_md": false 00:07:50.096 }, 00:07:50.096 "memory_domains": [ 00:07:50.096 { 00:07:50.096 "dma_device_id": "system", 00:07:50.096 "dma_device_type": 1 00:07:50.096 } 00:07:50.096 ], 00:07:50.096 "driver_specific": { 00:07:50.096 "nvme": [ 00:07:50.096 { 00:07:50.096 "trid": { 00:07:50.096 "trtype": "TCP", 00:07:50.096 "adrfam": "IPv4", 00:07:50.096 "traddr": "10.0.0.2", 00:07:50.096 "trsvcid": "4420", 00:07:50.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:50.096 }, 00:07:50.096 "ctrlr_data": { 00:07:50.096 "cntlid": 1, 00:07:50.096 "vendor_id": "0x8086", 00:07:50.096 "model_number": "SPDK bdev Controller", 00:07:50.096 "serial_number": "SPDK0", 00:07:50.096 "firmware_revision": "24.09", 00:07:50.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.096 "oacs": { 00:07:50.096 "security": 0, 00:07:50.096 "format": 0, 00:07:50.096 "firmware": 0, 00:07:50.096 "ns_manage": 0 00:07:50.096 }, 00:07:50.096 
"multi_ctrlr": true, 00:07:50.096 "ana_reporting": false 00:07:50.096 }, 00:07:50.096 "vs": { 00:07:50.096 "nvme_version": "1.3" 00:07:50.096 }, 00:07:50.096 "ns_data": { 00:07:50.096 "id": 1, 00:07:50.096 "can_share": true 00:07:50.096 } 00:07:50.096 } 00:07:50.096 ], 00:07:50.096 "mp_policy": "active_passive" 00:07:50.096 } 00:07:50.096 } 00:07:50.096 ] 00:07:50.096 17:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2695457 00:07:50.096 17:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.096 17:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:50.096 Running I/O for 10 seconds... 00:07:51.030 Latency(us) 00:07:51.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.030 Nvme0n1 : 1.00 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:07:51.030 =================================================================================================================== 00:07:51.030 Total : 14623.00 57.12 0.00 0.00 0.00 0.00 0.00 00:07:51.030 00:07:51.964 17:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:07:51.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.964 Nvme0n1 : 2.00 14757.50 57.65 0.00 0.00 0.00 0.00 0.00 00:07:51.964 =================================================================================================================== 00:07:51.964 Total : 14757.50 57.65 0.00 0.00 0.00 0.00 0.00 00:07:51.964 00:07:52.223 true 00:07:52.223 17:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:07:52.223 17:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.481 17:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.481 17:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.481 17:50:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2695457 00:07:53.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.047 Nvme0n1 : 3.00 14880.00 58.12 0.00 0.00 0.00 0.00 0.00 00:07:53.047 =================================================================================================================== 00:07:53.047 Total : 14880.00 58.12 0.00 0.00 0.00 0.00 0.00 00:07:53.047 00:07:53.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.981 Nvme0n1 : 4.00 14877.25 58.11 0.00 0.00 0.00 0.00 0.00 00:07:53.981 =================================================================================================================== 00:07:53.981 Total : 14877.25 58.11 0.00 0.00 0.00 0.00 0.00 00:07:53.981 00:07:55.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:07:55.353 Nvme0n1 : 5.00 14964.00 58.45 0.00 0.00 0.00 0.00 0.00 00:07:55.353 =================================================================================================================== 00:07:55.353 Total : 14964.00 58.45 0.00 0.00 0.00 0.00 0.00 00:07:55.353 00:07:56.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.288 Nvme0n1 : 6.00 15021.83 58.68 0.00 0.00 0.00 0.00 0.00 00:07:56.288 =================================================================================================================== 00:07:56.288 Total : 15021.83 58.68 0.00 0.00 0.00 0.00 0.00 00:07:56.288 00:07:57.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.221 Nvme0n1 : 7.00 15063.57 58.84 0.00 0.00 0.00 0.00 0.00 00:07:57.221 =================================================================================================================== 00:07:57.221 Total : 15063.57 58.84 0.00 0.00 0.00 0.00 0.00 00:07:57.221 00:07:58.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.154 Nvme0n1 : 8.00 15103.62 59.00 0.00 0.00 0.00 0.00 0.00 00:07:58.154 =================================================================================================================== 00:07:58.154 Total : 15103.62 59.00 0.00 0.00 0.00 0.00 0.00 00:07:58.154 00:07:59.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.086 Nvme0n1 : 9.00 15107.00 59.01 0.00 0.00 0.00 0.00 0.00 00:07:59.086 =================================================================================================================== 00:07:59.086 Total : 15107.00 59.01 0.00 0.00 0.00 0.00 0.00 00:07:59.086 00:08:00.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.018 Nvme0n1 : 10.00 15132.60 59.11 0.00 0.00 0.00 0.00 0.00 00:08:00.018 =================================================================================================================== 00:08:00.018 Total : 15132.60 59.11 0.00 0.00 0.00 0.00 0.00 00:08:00.018 00:08:00.018 00:08:00.018 Latency(us) 00:08:00.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.018 Nvme0n1 : 10.00 15140.46 59.14 0.00 0.00 8449.52 4878.79 16796.63 00:08:00.018 =================================================================================================================== 00:08:00.018 Total : 15140.46 59.14 0.00 0.00 8449.52 4878.79 16796.63 00:08:00.018 0 00:08:00.018 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2695326 00:08:00.018 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2695326 ']' 00:08:00.018 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2695326 00:08:00.018 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:00.018 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.018 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2695326 00:08:00.276 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:00.276 
17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:00.276 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2695326' 00:08:00.276 killing process with pid 2695326 00:08:00.276 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2695326 00:08:00.276 Received shutdown signal, test time was about 10.000000 seconds 00:08:00.276 00:08:00.276 Latency(us) 00:08:00.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.276 =================================================================================================================== 00:08:00.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:00.276 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2695326 00:08:00.534 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.792 17:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.050 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:08:01.050 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:01.050 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:01.050 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:01.050 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.308 [2024-07-24 17:50:47.526795] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:01.308 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:08:01.573 request: 00:08:01.573 { 00:08:01.573 "uuid": "0fc69843-900e-46f0-8c2a-f1334f373afd", 00:08:01.573 "method": "bdev_lvol_get_lvstores", 00:08:01.573 "req_id": 1 00:08:01.573 } 00:08:01.573 Got JSON-RPC error response 00:08:01.573 response: 00:08:01.573 { 00:08:01.573 "code": -19, 00:08:01.573 "message": "No such device" 00:08:01.573 } 00:08:01.573 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:01.573 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:01.573 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:01.573 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:01.573 17:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.831 aio_bdev 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 07f8e9cc-0a3b-4b11-bf52-c14467d4368c 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=07f8e9cc-0a3b-4b11-bf52-c14467d4368c 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.089 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 07f8e9cc-0a3b-4b11-bf52-c14467d4368c -t 2000 00:08:02.655 [ 00:08:02.655 { 00:08:02.655 "name": "07f8e9cc-0a3b-4b11-bf52-c14467d4368c", 00:08:02.655 "aliases": [ 00:08:02.655 "lvs/lvol" 00:08:02.655 ], 00:08:02.655 "product_name": "Logical Volume", 00:08:02.655 "block_size": 4096, 00:08:02.655 "num_blocks": 38912, 00:08:02.655 "uuid": "07f8e9cc-0a3b-4b11-bf52-c14467d4368c", 00:08:02.655 "assigned_rate_limits": { 00:08:02.655 "rw_ios_per_sec": 0, 00:08:02.655 "rw_mbytes_per_sec": 0, 00:08:02.655 "r_mbytes_per_sec": 0, 00:08:02.655 "w_mbytes_per_sec": 0 00:08:02.655 }, 00:08:02.655 "claimed": false, 00:08:02.655 "zoned": false, 00:08:02.655 "supported_io_types": { 00:08:02.655 "read": true, 00:08:02.655 "write": true, 00:08:02.655 "unmap": true, 00:08:02.655 "flush": false, 00:08:02.655 "reset": true, 00:08:02.655 "nvme_admin": false, 00:08:02.655 "nvme_io": false, 00:08:02.655 "nvme_io_md": false, 00:08:02.655 "write_zeroes": true, 00:08:02.655 "zcopy": false, 00:08:02.655 "get_zone_info": false, 00:08:02.655 "zone_management": false, 00:08:02.655 "zone_append": false, 00:08:02.655 "compare": false, 00:08:02.655 "compare_and_write": false, 00:08:02.655 "abort": false, 00:08:02.655 "seek_hole": true, 00:08:02.655 "seek_data": true, 00:08:02.655 "copy": false, 00:08:02.655 "nvme_iov_md": false 00:08:02.655 }, 00:08:02.655 "driver_specific": { 00:08:02.655 "lvol": { 00:08:02.655 "lvol_store_uuid": "0fc69843-900e-46f0-8c2a-f1334f373afd", 00:08:02.655 "base_bdev": "aio_bdev", 00:08:02.655 "thin_provision": false, 00:08:02.655 "num_allocated_clusters": 38, 00:08:02.655 "snapshot": false, 00:08:02.655 "clone": false, 00:08:02.655 "esnap_clone": false 00:08:02.655 } 00:08:02.655 } 00:08:02.655 } 00:08:02.655 ] 00:08:02.655 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:02.655 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:08:02.655 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.655 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:02.655 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:08:02.655 17:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:02.913 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:02.913 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 07f8e9cc-0a3b-4b11-bf52-c14467d4368c 00:08:03.171 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0fc69843-900e-46f0-8c2a-f1334f373afd 00:08:03.429 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.687 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.945 00:08:03.945 real 0m17.382s 00:08:03.945 user 0m16.881s 00:08:03.945 sys 0m1.936s 00:08:03.945 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.945 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:03.945 ************************************ 00:08:03.945 END TEST lvs_grow_clean 00:08:03.945 ************************************ 00:08:03.945 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:03.945 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.945 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.945 17:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.945 ************************************ 00:08:03.945 START TEST lvs_grow_dirty 00:08:03.945 ************************************ 00:08:03.945 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:03.945 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:03.945 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:03.945 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:03.945 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:03.945 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:03.945 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:03.946 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.946 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.946 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.204 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:04.204 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:04.462 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:04.462 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:04.462 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:04.720 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:04.720 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:04.720 17:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7fcb2de0-7119-408b-851e-ceb30a60702c lvol 150 00:08:04.978 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a1818d7c-b4dc-4bb9-820d-12fee0ca9478 00:08:04.978 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:04.978 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:05.235 [2024-07-24 17:50:51.304419] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:05.235 [2024-07-24 17:50:51.304509] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:05.235 true 00:08:05.235 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:05.235 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:05.493 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:05.493 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.751 17:50:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a1818d7c-b4dc-4bb9-820d-12fee0ca9478 00:08:06.009 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:06.267 [2024-07-24 17:50:52.339548] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.267 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
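
The setup traced above, distilled into a standalone sketch. Every RPC below is lifted verbatim from this trace (including the grow step that the harness issues later); only the short path /tmp/aio_bdev and the two shell variables are placeholders, and a running nvmf_tgt answering on the default /var/tmp/spdk.sock RPC socket is assumed.

    # Back the lvstore with a 200 MiB AIO file, 4 KiB logical blocks
    truncate -s 200M /tmp/aio_bdev
    scripts/rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
    # 4 MiB clusters; --md-pages-per-cluster-ratio 300 reserves extra metadata pages so the store can grow later
    scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    scripts/rpc.py bdev_lvol_create -u "$lvs_uuid" lvol 150    # 150 MiB thick-provisioned volume
    # Grow the backing file, rescan so the AIO bdev picks up the new size, then grow the store itself
    truncate -s 400M /tmp/aio_bdev
    scripts/rpc.py bdev_aio_rescan aio_bdev
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"
    # Export the volume over NVMe/TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
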
00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2697504 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2697504 /var/tmp/bdevperf.sock 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2697504 ']' 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:06.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.526 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.526 [2024-07-24 17:50:52.640520] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
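
For reference, the bdevperf half of the test as a standalone sketch, using the same flags as the invocation above; the relative paths and shell backgrounding are shorthand. The -z flag starts bdevperf idle so the NVMe-oF bdev can be attached over its private RPC socket before the run begins.

    # Core mask 0x2, 4 KiB randwrite, queue depth 128, 10 s, 1 s stats interval, start idle (-z)
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the exported namespace; it appears as bdev Nvme0n1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # Kick off the configured workload; the per-second and summary tables follow
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
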
00:08:06.526 [2024-07-24 17:50:52.640600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697504 ] 00:08:06.526 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.526 [2024-07-24 17:50:52.702517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.785 [2024-07-24 17:50:52.820747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.785 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.785 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:06.785 17:50:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:07.349 Nvme0n1 00:08:07.349 17:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:07.607 [ 00:08:07.607 { 00:08:07.607 "name": "Nvme0n1", 00:08:07.607 "aliases": [ 00:08:07.607 "a1818d7c-b4dc-4bb9-820d-12fee0ca9478" 00:08:07.607 ], 00:08:07.607 "product_name": "NVMe disk", 00:08:07.607 "block_size": 4096, 00:08:07.607 "num_blocks": 38912, 00:08:07.607 "uuid": "a1818d7c-b4dc-4bb9-820d-12fee0ca9478", 00:08:07.607 "assigned_rate_limits": { 00:08:07.607 "rw_ios_per_sec": 0, 00:08:07.607 "rw_mbytes_per_sec": 0, 00:08:07.607 "r_mbytes_per_sec": 0, 00:08:07.607 "w_mbytes_per_sec": 0 00:08:07.607 }, 00:08:07.607 "claimed": false, 00:08:07.607 "zoned": false, 00:08:07.607 "supported_io_types": { 00:08:07.607 "read": true, 00:08:07.607 "write": true, 00:08:07.607 "unmap": true, 00:08:07.607 "flush": true, 00:08:07.607 "reset": true, 00:08:07.607 "nvme_admin": true, 00:08:07.607 "nvme_io": true, 00:08:07.607 "nvme_io_md": false, 00:08:07.607 "write_zeroes": true, 00:08:07.607 "zcopy": false, 00:08:07.607 "get_zone_info": false, 00:08:07.607 "zone_management": false, 00:08:07.607 "zone_append": false, 00:08:07.607 "compare": true, 00:08:07.607 "compare_and_write": true, 00:08:07.607 "abort": true, 00:08:07.607 "seek_hole": false, 00:08:07.607 "seek_data": false, 00:08:07.607 "copy": true, 00:08:07.607 "nvme_iov_md": false 00:08:07.607 }, 00:08:07.607 "memory_domains": [ 00:08:07.607 { 00:08:07.607 "dma_device_id": "system", 00:08:07.607 "dma_device_type": 1 00:08:07.607 } 00:08:07.607 ], 00:08:07.607 "driver_specific": { 00:08:07.607 "nvme": [ 00:08:07.607 { 00:08:07.607 "trid": { 00:08:07.607 "trtype": "TCP", 00:08:07.607 "adrfam": "IPv4", 00:08:07.607 "traddr": "10.0.0.2", 00:08:07.607 "trsvcid": "4420", 00:08:07.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:07.607 }, 00:08:07.607 "ctrlr_data": { 00:08:07.607 "cntlid": 1, 00:08:07.607 "vendor_id": "0x8086", 00:08:07.607 "model_number": "SPDK bdev Controller", 00:08:07.607 "serial_number": "SPDK0", 00:08:07.607 "firmware_revision": "24.09", 00:08:07.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:07.607 "oacs": { 00:08:07.607 "security": 0, 00:08:07.607 "format": 0, 00:08:07.607 "firmware": 0, 00:08:07.607 "ns_manage": 0 00:08:07.607 }, 00:08:07.607 
"multi_ctrlr": true, 00:08:07.607 "ana_reporting": false 00:08:07.607 }, 00:08:07.607 "vs": { 00:08:07.607 "nvme_version": "1.3" 00:08:07.607 }, 00:08:07.607 "ns_data": { 00:08:07.607 "id": 1, 00:08:07.607 "can_share": true 00:08:07.607 } 00:08:07.607 } 00:08:07.607 ], 00:08:07.607 "mp_policy": "active_passive" 00:08:07.607 } 00:08:07.607 } 00:08:07.607 ] 00:08:07.607 17:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2697642 00:08:07.607 17:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:07.607 17:50:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:07.607 Running I/O for 10 seconds... 00:08:08.542 Latency(us) 00:08:08.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.542 Nvme0n1 : 1.00 13496.00 52.72 0.00 0.00 0.00 0.00 0.00 00:08:08.542 =================================================================================================================== 00:08:08.542 Total : 13496.00 52.72 0.00 0.00 0.00 0.00 0.00 00:08:08.542 00:08:09.472 17:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:09.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.730 Nvme0n1 : 2.00 13686.50 53.46 0.00 0.00 0.00 0.00 0.00 00:08:09.730 =================================================================================================================== 00:08:09.730 Total : 13686.50 53.46 0.00 0.00 0.00 0.00 0.00 00:08:09.730 00:08:09.730 true 00:08:09.730 17:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:09.731 17:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:10.016 17:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:10.016 17:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:10.016 17:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2697642 00:08:10.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.609 Nvme0n1 : 3.00 13748.00 53.70 0.00 0.00 0.00 0.00 0.00 00:08:10.609 =================================================================================================================== 00:08:10.609 Total : 13748.00 53.70 0.00 0.00 0.00 0.00 0.00 00:08:10.609 00:08:11.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.541 Nvme0n1 : 4.00 13810.25 53.95 0.00 0.00 0.00 0.00 0.00 00:08:11.541 =================================================================================================================== 00:08:11.541 Total : 13810.25 53.95 0.00 0.00 0.00 0.00 0.00 00:08:11.541 00:08:12.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:12.913 Nvme0n1 : 5.00 13848.60 54.10 0.00 0.00 0.00 0.00 0.00 00:08:12.913 =================================================================================================================== 00:08:12.913 Total : 13848.60 54.10 0.00 0.00 0.00 0.00 0.00 00:08:12.913 00:08:13.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.844 Nvme0n1 : 6.00 13895.67 54.28 0.00 0.00 0.00 0.00 0.00 00:08:13.844 =================================================================================================================== 00:08:13.844 Total : 13895.67 54.28 0.00 0.00 0.00 0.00 0.00 00:08:13.844 00:08:14.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.774 Nvme0n1 : 7.00 13909.43 54.33 0.00 0.00 0.00 0.00 0.00 00:08:14.775 =================================================================================================================== 00:08:14.775 Total : 13909.43 54.33 0.00 0.00 0.00 0.00 0.00 00:08:14.775 00:08:15.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.708 Nvme0n1 : 8.00 13937.12 54.44 0.00 0.00 0.00 0.00 0.00 00:08:15.708 =================================================================================================================== 00:08:15.708 Total : 13937.12 54.44 0.00 0.00 0.00 0.00 0.00 00:08:15.708 00:08:16.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.641 Nvme0n1 : 9.00 13959.33 54.53 0.00 0.00 0.00 0.00 0.00 00:08:16.641 =================================================================================================================== 00:08:16.641 Total : 13959.33 54.53 0.00 0.00 0.00 0.00 0.00 00:08:16.641 00:08:17.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.575 Nvme0n1 : 10.00 13969.70 54.57 0.00 0.00 0.00 0.00 0.00 00:08:17.575 =================================================================================================================== 00:08:17.575 Total : 13969.70 54.57 0.00 0.00 0.00 0.00 0.00 00:08:17.575 00:08:17.575 00:08:17.575 Latency(us) 00:08:17.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.575 Nvme0n1 : 10.01 13975.46 54.59 0.00 0.00 9153.86 5097.24 17087.91 00:08:17.575 =================================================================================================================== 00:08:17.575 Total : 13975.46 54.59 0.00 0.00 9153.86 5097.24 17087.91 00:08:17.575 0 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2697504 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2697504 ']' 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2697504 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2697504 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:17.575 
17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2697504' 00:08:17.575 killing process with pid 2697504 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2697504 00:08:17.575 Received shutdown signal, test time was about 10.000000 seconds 00:08:17.575 00:08:17.575 Latency(us) 00:08:17.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.575 =================================================================================================================== 00:08:17.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:17.575 17:51:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2697504 00:08:18.141 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.141 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:18.399 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:18.399 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:18.657 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:18.657 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:18.657 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2695001 00:08:18.657 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2695001 00:08:18.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2695001 Killed "${NVMF_APP[@]}" "$@" 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2699087 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 2699087 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2699087 ']' 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.916 17:51:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.916 [2024-07-24 17:51:04.988578] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:08:18.916 [2024-07-24 17:51:04.988667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.916 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.916 [2024-07-24 17:51:05.052878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.916 [2024-07-24 17:51:05.162894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.916 [2024-07-24 17:51:05.162969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.916 [2024-07-24 17:51:05.162982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.916 [2024-07-24 17:51:05.162994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.916 [2024-07-24 17:51:05.163003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
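
A note on the two hints in the banner just printed: because the target was started with -e 0xFFFF, all tracepoint groups are enabled, so the shared-memory trace can be inspected live or archived for later. Both commands come from this log (the banner and the process_shm teardown further down); the archive path is shortened here.

    spdk_trace -s nvmf -i 0                                      # live snapshot of instance 0's tracepoints
    tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0    # or archive the shm file for offline analysis
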
00:08:18.916 [2024-07-24 17:51:05.163031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.175 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.175 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:19.175 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.175 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:19.175 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.175 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.175 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.433 [2024-07-24 17:51:05.571978] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:19.433 [2024-07-24 17:51:05.572172] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:19.433 [2024-07-24 17:51:05.572223] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a1818d7c-b4dc-4bb9-820d-12fee0ca9478 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a1818d7c-b4dc-4bb9-820d-12fee0ca9478 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:19.433 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:19.691 17:51:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a1818d7c-b4dc-4bb9-820d-12fee0ca9478 -t 2000 00:08:19.950 [ 00:08:19.950 { 00:08:19.950 "name": "a1818d7c-b4dc-4bb9-820d-12fee0ca9478", 00:08:19.950 "aliases": [ 00:08:19.950 "lvs/lvol" 00:08:19.950 ], 00:08:19.950 "product_name": "Logical Volume", 00:08:19.950 "block_size": 4096, 00:08:19.950 "num_blocks": 38912, 00:08:19.950 "uuid": "a1818d7c-b4dc-4bb9-820d-12fee0ca9478", 00:08:19.950 "assigned_rate_limits": { 00:08:19.950 "rw_ios_per_sec": 0, 00:08:19.950 "rw_mbytes_per_sec": 0, 00:08:19.950 "r_mbytes_per_sec": 0, 00:08:19.950 "w_mbytes_per_sec": 0 00:08:19.950 }, 00:08:19.950 "claimed": false, 00:08:19.950 "zoned": false, 
00:08:19.950 "supported_io_types": { 00:08:19.950 "read": true, 00:08:19.950 "write": true, 00:08:19.950 "unmap": true, 00:08:19.950 "flush": false, 00:08:19.950 "reset": true, 00:08:19.950 "nvme_admin": false, 00:08:19.950 "nvme_io": false, 00:08:19.950 "nvme_io_md": false, 00:08:19.950 "write_zeroes": true, 00:08:19.950 "zcopy": false, 00:08:19.950 "get_zone_info": false, 00:08:19.950 "zone_management": false, 00:08:19.950 "zone_append": false, 00:08:19.950 "compare": false, 00:08:19.950 "compare_and_write": false, 00:08:19.950 "abort": false, 00:08:19.950 "seek_hole": true, 00:08:19.950 "seek_data": true, 00:08:19.950 "copy": false, 00:08:19.950 "nvme_iov_md": false 00:08:19.950 }, 00:08:19.950 "driver_specific": { 00:08:19.950 "lvol": { 00:08:19.950 "lvol_store_uuid": "7fcb2de0-7119-408b-851e-ceb30a60702c", 00:08:19.950 "base_bdev": "aio_bdev", 00:08:19.950 "thin_provision": false, 00:08:19.950 "num_allocated_clusters": 38, 00:08:19.950 "snapshot": false, 00:08:19.950 "clone": false, 00:08:19.950 "esnap_clone": false 00:08:19.950 } 00:08:19.950 } 00:08:19.950 } 00:08:19.950 ] 00:08:19.950 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:19.950 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:19.950 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:20.208 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:20.208 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:20.208 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:20.467 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:20.467 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.726 [2024-07-24 17:51:06.849012] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:20.726 17:51:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:20.984 request: 00:08:20.984 { 00:08:20.984 "uuid": "7fcb2de0-7119-408b-851e-ceb30a60702c", 00:08:20.984 "method": "bdev_lvol_get_lvstores", 00:08:20.984 "req_id": 1 00:08:20.984 } 00:08:20.984 Got JSON-RPC error response 00:08:20.984 response: 00:08:20.984 { 00:08:20.984 "code": -19, 00:08:20.984 "message": "No such device" 00:08:20.984 } 00:08:20.984 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:20.984 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:20.984 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:20.984 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:20.984 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.242 aio_bdev 00:08:21.242 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a1818d7c-b4dc-4bb9-820d-12fee0ca9478 00:08:21.242 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a1818d7c-b4dc-4bb9-820d-12fee0ca9478 00:08:21.242 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:21.242 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:21.242 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:21.242 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:21.242 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:21.500 17:51:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a1818d7c-b4dc-4bb9-820d-12fee0ca9478 -t 2000 00:08:21.758 [ 00:08:21.758 { 00:08:21.758 "name": "a1818d7c-b4dc-4bb9-820d-12fee0ca9478", 00:08:21.758 "aliases": [ 00:08:21.758 "lvs/lvol" 00:08:21.758 ], 00:08:21.758 "product_name": "Logical Volume", 00:08:21.758 "block_size": 4096, 00:08:21.758 "num_blocks": 38912, 00:08:21.758 "uuid": "a1818d7c-b4dc-4bb9-820d-12fee0ca9478", 00:08:21.758 "assigned_rate_limits": { 00:08:21.758 "rw_ios_per_sec": 0, 00:08:21.758 "rw_mbytes_per_sec": 0, 00:08:21.758 "r_mbytes_per_sec": 0, 00:08:21.758 "w_mbytes_per_sec": 0 00:08:21.758 }, 00:08:21.758 "claimed": false, 00:08:21.758 "zoned": false, 00:08:21.758 "supported_io_types": { 00:08:21.758 "read": true, 00:08:21.758 "write": true, 00:08:21.758 "unmap": true, 00:08:21.758 "flush": false, 00:08:21.758 "reset": true, 00:08:21.758 "nvme_admin": false, 00:08:21.758 "nvme_io": false, 00:08:21.758 "nvme_io_md": false, 00:08:21.758 "write_zeroes": true, 00:08:21.758 "zcopy": false, 00:08:21.758 "get_zone_info": false, 00:08:21.758 "zone_management": false, 00:08:21.758 "zone_append": false, 00:08:21.758 "compare": false, 00:08:21.758 "compare_and_write": false, 00:08:21.758 "abort": false, 00:08:21.758 "seek_hole": true, 00:08:21.758 "seek_data": true, 00:08:21.758 "copy": false, 00:08:21.758 "nvme_iov_md": false 00:08:21.758 }, 00:08:21.758 "driver_specific": { 00:08:21.758 "lvol": { 00:08:21.758 "lvol_store_uuid": "7fcb2de0-7119-408b-851e-ceb30a60702c", 00:08:21.758 "base_bdev": "aio_bdev", 00:08:21.758 "thin_provision": false, 00:08:21.758 "num_allocated_clusters": 38, 00:08:21.758 "snapshot": false, 00:08:21.758 "clone": false, 00:08:21.758 "esnap_clone": false 00:08:21.758 } 00:08:21.758 } 00:08:21.758 } 00:08:21.758 ] 00:08:21.759 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:21.759 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:21.759 17:51:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:22.017 17:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:22.017 17:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fcb2de0-7119-408b-851e-ceb30a60702c 00:08:22.017 17:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:22.275 17:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:22.275 17:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a1818d7c-b4dc-4bb9-820d-12fee0ca9478 00:08:22.532 17:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fcb2de0-7119-408b-851e-ceb30a60702c 
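
Teardown mirrors setup in reverse, exactly as the four calls recorded in this trace: volume first, then the store, then the backing AIO bdev, then the file. The uuids and the short path are placeholders.

    scripts/rpc.py bdev_lvol_delete "$lvol_uuid"
    scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs_uuid"
    scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f /tmp/aio_bdev
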
00:08:22.790 17:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.047 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.047 00:08:23.047 real 0m19.190s 00:08:23.047 user 0m48.465s 00:08:23.047 sys 0m4.675s 00:08:23.047 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.047 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.047 ************************************ 00:08:23.047 END TEST lvs_grow_dirty 00:08:23.047 ************************************ 00:08:23.047 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:23.047 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:23.047 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:23.048 nvmf_trace.0 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.048 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.048 rmmod nvme_tcp 00:08:23.048 rmmod nvme_fabrics 00:08:23.048 rmmod nvme_keyring 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2699087 ']' 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2699087 00:08:23.305 
17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2699087 ']' 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2699087 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2699087 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2699087' 00:08:23.305 killing process with pid 2699087 00:08:23.305 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2699087 00:08:23.306 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2699087 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.564 17:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.466 00:08:25.466 real 0m41.960s 00:08:25.466 user 1m11.117s 00:08:25.466 sys 0m8.461s 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.466 ************************************ 00:08:25.466 END TEST nvmf_lvs_grow 00:08:25.466 ************************************ 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.466 ************************************ 00:08:25.466 START TEST nvmf_bdev_io_wait 00:08:25.466 ************************************ 00:08:25.466 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:25.724 * Looking for test storage... 00:08:25.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.724 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.724 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:25.724 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.725 
17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.725 17:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.625 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:27.626 17:51:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:27.626 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:27.626 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:27.626 Found net devices under 0000:09:00.0: cvl_0_0 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:27.626 Found net devices under 0000:09:00.1: cvl_0_1 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.626 17:51:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.626 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:08:27.885 00:08:27.885 --- 10.0.0.2 ping statistics --- 00:08:27.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.885 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:27.885 00:08:27.885 --- 10.0.0.1 ping statistics --- 00:08:27.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.885 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.885 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2702120 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2702120 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2702120 ']' 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.886 17:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.886 [2024-07-24 17:51:14.009610] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
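At this point nvmfappstart has launched nvmf_tgt inside the target namespace with --wait-for-rpc, so the app boots its reactors but defers subsystem initialization until an explicit RPC. The deliberately tiny bdev I/O pool (bdev_set_options -p 5 -c 1, issued just below) is what lets this test exhaust the pool and exercise the bdev_io_wait retry path. A minimal sketch of the equivalent manual sequence, with the binary path and flags taken verbatim from the trace and scripts/rpc.py assumed from the same SPDK tree:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target inside the namespace; -m 0xF pins it to cores 0-3 and
  # --wait-for-rpc holds off subsystem init until framework_start_init.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Shrink the bdev I/O pool so submissions can fail with ENOMEM and queue
  # on the io_wait list, then finish init and add the TCP transport.
  "$SPDK_DIR/scripts/rpc.py" bdev_set_options -p 5 -c 1
  "$SPDK_DIR/scripts/rpc.py" framework_start_init
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192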
00:08:27.886 [2024-07-24 17:51:14.009697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.886 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.886 [2024-07-24 17:51:14.087905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.144 [2024-07-24 17:51:14.212801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.144 [2024-07-24 17:51:14.212860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.144 [2024-07-24 17:51:14.212876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.144 [2024-07-24 17:51:14.212890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.144 [2024-07-24 17:51:14.212901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.144 [2024-07-24 17:51:14.212968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.144 [2024-07-24 17:51:14.213024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.144 [2024-07-24 17:51:14.213143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.144 [2024-07-24 17:51:14.213151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.144 17:51:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.144 [2024-07-24 17:51:14.359721] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.144 Malloc0 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.144 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.145 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.145 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.424 [2024-07-24 17:51:14.418799] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2702146 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2702148 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.424 { 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme$subsystem", 00:08:28.424 "trtype": "$TEST_TRANSPORT", 00:08:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "$NVMF_PORT", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.424 "hdgst": ${hdgst:-false}, 00:08:28.424 "ddgst": ${ddgst:-false} 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 } 00:08:28.424 EOF 00:08:28.424 )") 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2702150 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.424 { 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme$subsystem", 00:08:28.424 "trtype": "$TEST_TRANSPORT", 00:08:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "$NVMF_PORT", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.424 "hdgst": ${hdgst:-false}, 00:08:28.424 "ddgst": ${ddgst:-false} 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 } 00:08:28.424 EOF 00:08:28.424 )") 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2702153 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.424 { 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme$subsystem", 00:08:28.424 "trtype": "$TEST_TRANSPORT", 00:08:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "$NVMF_PORT", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.424 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.424 "hdgst": ${hdgst:-false}, 00:08:28.424 "ddgst": ${ddgst:-false} 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 } 00:08:28.424 EOF 00:08:28.424 )") 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.424 { 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme$subsystem", 00:08:28.424 "trtype": "$TEST_TRANSPORT", 00:08:28.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "$NVMF_PORT", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.424 "hdgst": ${hdgst:-false}, 00:08:28.424 "ddgst": ${ddgst:-false} 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 } 00:08:28.424 EOF 00:08:28.424 )") 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2702146 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme1", 00:08:28.424 "trtype": "tcp", 00:08:28.424 "traddr": "10.0.0.2", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "4420", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.424 "hdgst": false, 00:08:28.424 "ddgst": false 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 }' 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme1", 00:08:28.424 "trtype": "tcp", 00:08:28.424 "traddr": "10.0.0.2", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "4420", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.424 "hdgst": false, 00:08:28.424 "ddgst": false 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 }' 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme1", 00:08:28.424 "trtype": "tcp", 00:08:28.424 "traddr": "10.0.0.2", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "4420", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.424 "hdgst": false, 00:08:28.424 "ddgst": false 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 }' 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.424 17:51:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.424 "params": { 00:08:28.424 "name": "Nvme1", 00:08:28.424 "trtype": "tcp", 00:08:28.424 "traddr": "10.0.0.2", 00:08:28.424 "adrfam": "ipv4", 00:08:28.424 "trsvcid": "4420", 00:08:28.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.424 "hdgst": false, 00:08:28.424 "ddgst": false 00:08:28.424 }, 00:08:28.424 "method": "bdev_nvme_attach_controller" 00:08:28.424 }' 00:08:28.424 [2024-07-24 17:51:14.465373] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:08:28.424 [2024-07-24 17:51:14.465464] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:28.424 [2024-07-24 17:51:14.465491] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:08:28.424 [2024-07-24 17:51:14.465563] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:28.424 [2024-07-24 17:51:14.466832] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:08:28.424 [2024-07-24 17:51:14.466833] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:08:28.424 [2024-07-24 17:51:14.466906] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:28.424 [2024-07-24 17:51:14.466907] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:28.424 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.424 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.425 [2024-07-24 17:51:14.648603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.696 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.696 [2024-07-24 17:51:14.751222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:28.696 [2024-07-24 17:51:14.754578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.696 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.696 [2024-07-24 17:51:14.832508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.696 [2024-07-24 17:51:14.858177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:28.696 [2024-07-24 17:51:14.906623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.696 [2024-07-24 17:51:14.925575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:28.954 [2024-07-24 17:51:14.997273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:28.954 Running I/O for 1 seconds... 00:08:28.954 Running I/O for 1 seconds... 00:08:28.954 Running I/O for 1 seconds... 00:08:29.212 Running I/O for 1 seconds...
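Each of the four bdevperf instances above (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80) receives its bdev configuration as JSON over /dev/fd/63. The "params" fragment printed by gen_nvmf_target_json earlier expands to a config of roughly the following shape (the params block is verbatim from the trace; the outer subsystems/config wrapper is the assumed form the helper emits):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

With this config each bdevperf process attaches a controller named Nvme1 over TCP to 10.0.0.2:4420 and runs its workload against the resulting Nvme1n1 namespace bdev.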
00:08:30.148 00:08:30.148 Latency(us) 00:08:30.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.148 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:30.148 Nvme1n1 : 1.01 10561.90 41.26 0.00 0.00 12064.97 8009.96 19418.07 00:08:30.148 =================================================================================================================== 00:08:30.148 Total : 10561.90 41.26 0.00 0.00 12064.97 8009.96 19418.07 00:08:30.148 00:08:30.148 Latency(us) 00:08:30.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.148 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:30.148 Nvme1n1 : 1.01 8161.39 31.88 0.00 0.00 15609.93 7912.87 26408.58 00:08:30.148 =================================================================================================================== 00:08:30.148 Total : 8161.39 31.88 0.00 0.00 15609.93 7912.87 26408.58 00:08:30.148 00:08:30.148 Latency(us) 00:08:30.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.148 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:30.148 Nvme1n1 : 1.01 9297.15 36.32 0.00 0.00 13712.60 6844.87 26602.76 00:08:30.148 =================================================================================================================== 00:08:30.148 Total : 9297.15 36.32 0.00 0.00 13712.60 6844.87 26602.76 00:08:30.148 00:08:30.148 Latency(us) 00:08:30.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.148 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:30.148 Nvme1n1 : 1.00 74565.53 291.27 0.00 0.00 1707.80 277.62 6310.87 00:08:30.148 =================================================================================================================== 00:08:30.148 Total : 74565.53 291.27 0.00 0.00 1707.80 277.62 6310.87 00:08:30.406 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2702148 00:08:30.406 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2702150 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2702153 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
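The four latency tables above are the payoff of the run: write, read, and unmap each sustain roughly 8-10K IOPS at queue depth 128 despite the 5-entry bdev I/O pool, suggesting the io_wait requeueing path is absorbing pool exhaustion rather than failing I/O. Flush IOPS (~74.6K) are far higher, plausibly because flushing a RAM-backed Malloc bdev completes without any media access. A sketch of replaying just the read job by hand against a still-listening target, reusing the exact flags from the trace (/tmp/nvme1.json is a hypothetical file holding the config sketched above; the test itself pipes it via /dev/fd/63):

  "$SPDK_DIR/build/examples/bdevperf" -m 0x20 -i 2 --json /tmp/nvme1.json \
      -q 128 -o 4096 -w read -t 1 -s 256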
00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.664 rmmod nvme_tcp 00:08:30.664 rmmod nvme_fabrics 00:08:30.664 rmmod nvme_keyring 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2702120 ']' 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2702120 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2702120 ']' 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2702120 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2702120 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2702120' 00:08:30.664 killing process with pid 2702120 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2702120 00:08:30.664 17:51:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2702120 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.922 17:51:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.827 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:32.827 00:08:32.827 real 0m7.369s 00:08:32.827 user 0m16.273s 00:08:32.827 sys 0m3.701s 00:08:32.827 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.827 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.827 ************************************ 00:08:32.827 END TEST 
nvmf_bdev_io_wait 00:08:32.827 ************************************ 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.086 ************************************ 00:08:33.086 START TEST nvmf_queue_depth 00:08:33.086 ************************************ 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:33.086 * Looking for test storage... 00:08:33.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.086 17:51:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:34.989 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:34.989 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:34.989 Found net devices under 0000:09:00.0: cvl_0_0 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.989 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:34.990 Found net devices under 0000:09:00.1: cvl_0_1 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.990 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.248 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:35.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:08:35.249 00:08:35.249 --- 10.0.0.2 ping statistics --- 00:08:35.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.249 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:08:35.249 00:08:35.249 --- 10.0.0.1 ping statistics --- 00:08:35.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.249 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2704392 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2704392 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2704392 ']' 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
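[annotation] The two ping exchanges above close out nvmf_tcp_init: one physical e810 port has been moved into a network namespace to play the target while its sibling stays in the root namespace as the initiator. A condensed replay of the setup commands traced at nvmf/common.sh@244-@268 (interface names and addresses are the ones from this run; another rig's ports will enumerate differently):

    # start from clean addresses on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # target-side namespace; cvl_0_0 becomes the target port, cvl_0_1 the initiator port
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With connectivity confirmed, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" (@270) so nvmf_tgt runs inside the target namespace.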
00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.249 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.249 [2024-07-24 17:51:21.379250] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:08:35.249 [2024-07-24 17:51:21.379331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.249 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.249 [2024-07-24 17:51:21.444542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.507 [2024-07-24 17:51:21.556075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.507 [2024-07-24 17:51:21.556148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.507 [2024-07-24 17:51:21.556164] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.507 [2024-07-24 17:51:21.556176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.507 [2024-07-24 17:51:21.556186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.507 [2024-07-24 17:51:21.556222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.507 [2024-07-24 17:51:21.703545] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.507 Malloc0 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.507 [2024-07-24 17:51:21.760883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2704537 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2704537 /var/tmp/bdevperf.sock 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2704537 ']' 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:35.507 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.508 17:51:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.766 [2024-07-24 17:51:21.812975] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
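[annotation] Everything the queue_depth test needs on the target side is now in place: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. The rpc_cmd calls traced at queue_depth.sh@23-@27 are, as far as the harness goes, a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same setup replayed by hand looks roughly like this (paths and IDs taken from this run):

    # target setup, equivalent to the rpc_cmd trace above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf, started at @29 with "-z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10", then gets its initiator-side controller over its own RPC socket before perform_tests is invoked at @35:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1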
00:08:35.766 [2024-07-24 17:51:21.813065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2704537 ] 00:08:35.766 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.766 [2024-07-24 17:51:21.877940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.766 [2024-07-24 17:51:21.983508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.024 17:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.024 17:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:36.024 17:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:36.024 17:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.024 17:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.024 NVMe0n1 00:08:36.024 17:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.024 17:51:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:36.024 Running I/O for 10 seconds... 00:08:48.221 00:08:48.221 Latency(us) 00:08:48.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.221 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:48.221 Verification LBA range: start 0x0 length 0x4000 00:08:48.221 NVMe0n1 : 10.09 8385.63 32.76 0.00 0.00 121570.06 24466.77 77283.93 00:08:48.221 =================================================================================================================== 00:08:48.221 Total : 8385.63 32.76 0.00 0.00 121570.06 24466.77 77283.93 00:08:48.221 0 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2704537 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2704537 ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2704537 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704537 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704537' 00:08:48.221 killing process with pid 2704537 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2704537 00:08:48.221 Received shutdown 
signal, test time was about 10.000000 seconds 00:08:48.221 00:08:48.221 Latency(us) 00:08:48.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.221 =================================================================================================================== 00:08:48.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2704537 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.221 rmmod nvme_tcp 00:08:48.221 rmmod nvme_fabrics 00:08:48.221 rmmod nvme_keyring 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2704392 ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2704392 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2704392 ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2704392 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704392 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704392' 00:08:48.221 killing process with pid 2704392 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2704392 00:08:48.221 17:51:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2704392 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.222 17:51:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:49.158 00:08:49.158 real 0m16.035s 00:08:49.158 user 0m22.583s 00:08:49.158 sys 0m3.004s 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.158 ************************************ 00:08:49.158 END TEST nvmf_queue_depth 00:08:49.158 ************************************ 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.158 ************************************ 00:08:49.158 START TEST nvmf_target_multipath 00:08:49.158 ************************************ 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:49.158 * Looking for test storage... 
00:08:49.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.158 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.159 17:51:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
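[annotation] One thing worth flagging before the device scan repeats below: with only a single e810 port pair, common.sh leaves NVMF_SECOND_TARGET_IP empty (@240 above), and multipath.sh gives up early rather than run a one-path multipath test — hence the "only one NIC for nvmf test" message and exit 0 further down. A minimal sketch of that guard as reconstructed from the @45-@48 trace (the variable name is inferred from the empty '[ -z ]' test; the script itself may spell it differently):

    # multipath.sh@45-48: bail out when no second NIC pair is available
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi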
00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:51.058 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.058 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:51.059 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:51.059 Found net devices under 0000:09:00.0: cvl_0_0 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.059 17:51:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:51.059 Found net devices under 0000:09:00.1: cvl_0_1 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.059 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:08:51.059 00:08:51.059 --- 10.0.0.2 ping statistics --- 00:08:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.059 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:08:51.059 00:08:51.059 --- 10.0.0.1 ping statistics --- 00:08:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.059 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:51.059 only one NIC for nvmf test 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:51.059 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.060 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:51.060 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.060 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.060 rmmod nvme_tcp 00:08:51.060 rmmod nvme_fabrics 00:08:51.317 rmmod nvme_keyring 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.317 17:51:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.218 00:08:53.218 real 0m4.194s 
00:08:53.218 user 0m0.720s 00:08:53.218 sys 0m1.459s 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.218 ************************************ 00:08:53.218 END TEST nvmf_target_multipath 00:08:53.218 ************************************ 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.218 ************************************ 00:08:53.218 START TEST nvmf_zcopy 00:08:53.218 ************************************ 00:08:53.218 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:53.478 * Looking for test storage... 00:08:53.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.478 17:51:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.478 17:51:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.478 17:51:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:08:55.378 17:51:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:55.378 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:55.378 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:55.378 Found net devices under 0000:09:00.0: cvl_0_0 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:55.378 Found net devices under 0000:09:00.1: cvl_0_1 00:08:55.378 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.379 17:51:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:55.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:08:55.379 00:08:55.379 --- 10.0.0.2 ping statistics --- 00:08:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.379 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:08:55.379 00:08:55.379 --- 10.0.0.1 ping statistics --- 00:08:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.379 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2709601 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2709601 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2709601 ']' 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.379 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.637 [2024-07-24 17:51:41.668465] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
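The block above is the harness wiring the two E810 ports into a point-to-point NVMe/TCP rig: the target port is moved into a private network namespace so that target and initiator can share one host while the I/O still crosses the physical link. A minimal by-hand sketch of the same topology, assuming the same renamed ports (cvl_0_0 for the target, cvl_0_1 for the initiator) and the 10.0.0.0/24 addressing used in this run:

  ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                            # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability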
00:08:55.637 [2024-07-24 17:51:41.668538] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.637 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.637 [2024-07-24 17:51:41.735368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.637 [2024-07-24 17:51:41.845316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.637 [2024-07-24 17:51:41.845370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.637 [2024-07-24 17:51:41.845384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.638 [2024-07-24 17:51:41.845395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.638 [2024-07-24 17:51:41.845404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.638 [2024-07-24 17:51:41.845432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.896 17:51:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.896 [2024-07-24 17:51:41.999603] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.896 [2024-07-24 17:51:42.015792] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.896 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.897 malloc0 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:55.897 { 00:08:55.897 "params": { 00:08:55.897 "name": "Nvme$subsystem", 00:08:55.897 "trtype": "$TEST_TRANSPORT", 00:08:55.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.897 "adrfam": "ipv4", 00:08:55.897 "trsvcid": "$NVMF_PORT", 00:08:55.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.897 "hdgst": ${hdgst:-false}, 00:08:55.897 "ddgst": ${ddgst:-false} 00:08:55.897 }, 00:08:55.897 "method": "bdev_nvme_attach_controller" 00:08:55.897 } 00:08:55.897 EOF 00:08:55.897 )") 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
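At this point everything the zcopy test needs on the target side has been provisioned through rpc_cmd. Spelled out against scripts/rpc.py, the JSON-RPC client that rpc_cmd wraps (socket /var/tmp/spdk.sock as in the waitforlisten above; being a Unix socket, it is reachable even though nvmf_tgt runs inside the namespace), the sequence is roughly:

  RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy   # flags exactly as traced above: -c 0 sets the
                                                      # in-capsule data size to 0, --zcopy enables
                                                      # zero-copy bdev I/O on the TCP transport
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0          # 32 MiB RAM bdev, 4096-byte blocks, to export
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

--zcopy is the option under test here: it is what later lets bdevperf's in-flight requests hold bdev buffers directly while the namespace is paused and resumed.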
00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:55.897 17:51:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:55.897 "params": { 00:08:55.897 "name": "Nvme1", 00:08:55.897 "trtype": "tcp", 00:08:55.897 "traddr": "10.0.0.2", 00:08:55.897 "adrfam": "ipv4", 00:08:55.897 "trsvcid": "4420", 00:08:55.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.897 "hdgst": false, 00:08:55.897 "ddgst": false 00:08:55.897 }, 00:08:55.897 "method": "bdev_nvme_attach_controller" 00:08:55.897 }' 00:08:55.897 [2024-07-24 17:51:42.106372] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:08:55.897 [2024-07-24 17:51:42.106459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709629 ] 00:08:55.897 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.155 [2024-07-24 17:51:42.176952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.155 [2024-07-24 17:51:42.294163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.413 Running I/O for 10 seconds... 00:09:06.413 00:09:06.413 Latency(us) 00:09:06.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.413 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:06.413 Verification LBA range: start 0x0 length 0x1000 00:09:06.413 Nvme1n1 : 10.02 5155.43 40.28 0.00 0.00 24761.00 621.99 33204.91 00:09:06.413 =================================================================================================================== 00:09:06.413 Total : 5155.43 40.28 0.00 0.00 24761.00 621.99 33204.91 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2710945 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:06.671 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:06.671 { 00:09:06.672 "params": { 00:09:06.672 "name": "Nvme$subsystem", 00:09:06.672 "trtype": "$TEST_TRANSPORT", 00:09:06.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.672 "adrfam": "ipv4", 00:09:06.672 "trsvcid": "$NVMF_PORT", 00:09:06.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.672 "hdgst": ${hdgst:-false}, 00:09:06.672 "ddgst": ${ddgst:-false} 00:09:06.672 }, 00:09:06.672 "method": "bdev_nvme_attach_controller" 00:09:06.672 } 00:09:06.672 EOF 00:09:06.672 )") 00:09:06.672 17:51:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:06.672 [2024-07-24 17:51:52.928760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.672 [2024-07-24 17:51:52.928805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.672 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:06.672 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:06.672 17:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:06.672 "params": { 00:09:06.672 "name": "Nvme1", 00:09:06.672 "trtype": "tcp", 00:09:06.672 "traddr": "10.0.0.2", 00:09:06.672 "adrfam": "ipv4", 00:09:06.672 "trsvcid": "4420", 00:09:06.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.672 "hdgst": false, 00:09:06.672 "ddgst": false 00:09:06.672 }, 00:09:06.672 "method": "bdev_nvme_attach_controller" 00:09:06.672 }' 00:09:06.672 [2024-07-24 17:51:52.936741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.672 [2024-07-24 17:51:52.936770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:52.944747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:52.944773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:52.952758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:52.952780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:52.960778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:52.960800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:52.964738] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
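The first bdevperf pass (verify workload) finished cleanly at 5155.43 IOPS, 40.28 MiB/s, with an average latency of about 24.8 ms at queue depth 128. For the second pass, gen_nvmf_target_json again emits the bdev_nvme_attach_controller object printed verbatim in the trace, and bdevperf consumes it through a /dev/fd process substitution. A stand-alone sketch, assuming the standard SPDK JSON-config wrapper around that object (the wrapper itself is not echoed in this trace):

  cat > /tmp/nvmf_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # the second run above: 5 s of 50/50 random read/write, queue depth 128, 8 KiB I/Os
  ./build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 5 -q 128 -w randrw -M 50 -o 8192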
00:09:06.930 [2024-07-24 17:51:52.964823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710945 ] 00:09:06.930 [2024-07-24 17:51:52.968797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:52.968824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:52.976819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:52.976839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:52.984839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:52.984859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:52.992862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.930 [2024-07-24 17:51:52.992882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:53.000900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:53.000924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:53.008923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:53.008949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:53.016946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:53.016970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:53.024968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:53.024992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:53.026772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.930 [2024-07-24 17:51:53.033010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:53.033041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:53.041035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:53.041072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.930 [2024-07-24 17:51:53.049032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.930 [2024-07-24 17:51:53.049057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.057055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.057079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.065076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 
17:51:53.065100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.073099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.073132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.081128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.081164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.089177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.089206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.097202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.097232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.105200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.105221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.113219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.113247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.121238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.121259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.129264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.129286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.137285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.137308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.144701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.931 [2024-07-24 17:51:53.145305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.145327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.153326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.153347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.161368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.161423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.169413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.169446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.177427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.177478] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.185470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.185509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.931 [2024-07-24 17:51:53.193494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.931 [2024-07-24 17:51:53.193532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.201531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.201574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.209513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.209543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.217533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.217568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.225571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.225606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.233596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.233633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.241616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.241642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.249615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.249641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.257636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.257661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.265667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.265696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.273686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.273713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.281709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.281737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.289730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.289758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.297754] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.297781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.305775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.305803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.313795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.313821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.321825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.321856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.329839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.329865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 Running I/O for 5 seconds... 00:09:07.190 [2024-07-24 17:51:53.337862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.337886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.351929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.351960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.363793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.363824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.375810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.375840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.387690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.387721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.399256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.399286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.410597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.410627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.422479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.422509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.433958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.433988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.445360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 
[2024-07-24 17:51:53.445388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.190 [2024-07-24 17:51:53.458834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.190 [2024-07-24 17:51:53.458865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.469799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.469830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.481528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.481559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.492829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.492861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.503808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.503839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.515288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.515316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.526416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.526456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.537864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.537894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.549263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.549291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.560548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.560578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.571914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.571944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.585045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.585075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.596049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.596086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.607778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.607809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.619004] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.619034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.630199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.630226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.641649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.641678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.653043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.653073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.664395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.664445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.675633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.675663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.686932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.686961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.698253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.698281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.449 [2024-07-24 17:51:53.709677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.449 [2024-07-24 17:51:53.709706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.707 [2024-07-24 17:51:53.721747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.707 [2024-07-24 17:51:53.721778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.707 [2024-07-24 17:51:53.733539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.707 [2024-07-24 17:51:53.733569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.707 [2024-07-24 17:51:53.746738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.707 [2024-07-24 17:51:53.746768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.707 [2024-07-24 17:51:53.757123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.707 [2024-07-24 17:51:53.757168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.707 [2024-07-24 17:51:53.768661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.707 [2024-07-24 17:51:53.768690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.707 [2024-07-24 17:51:53.780155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.707 [2024-07-24 17:51:53.780183] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:56.971615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:56.982167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:56.982194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:56.993364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:56.993391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:57.004510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:57.004539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:57.015679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:57.015708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:57.028939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:57.028969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:57.040019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:57.040049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:57.051375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:57.051418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:57.062373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:57.062400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.816 [2024-07-24 17:51:57.073469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.816 [2024-07-24 17:51:57.073499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.074 [2024-07-24 17:51:57.085253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.074 [2024-07-24 17:51:57.085281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.074 [2024-07-24 17:51:57.097144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.074 [2024-07-24 17:51:57.097172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.074 [2024-07-24 17:51:57.108782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.074 [2024-07-24 17:51:57.108812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.120016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.120047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.131617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.131647] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.142460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.142487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.153981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.154019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.165422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.165467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.176520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.176551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.188155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.188192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.199949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.199979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.211254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.211280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.224277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.224304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.235220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.235248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.246888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.246919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.258520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.258550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.270012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.270041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.281280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.281308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.294766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.294795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.305459] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.305490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.316434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.316465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.327991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.328021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.075 [2024-07-24 17:51:57.339826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.075 [2024-07-24 17:51:57.339857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.351545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.351576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.364773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.364803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.375880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.375922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.387258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.387285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.400534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.400565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.411171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.411198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.422586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.422617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.434397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.434438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.446187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.446215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.457816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.457846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.469427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.469458] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.481271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.481298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.492786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.492816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.504301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.504328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.517041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.517070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.527005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.527035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.539238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.539265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.550844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.550874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.561987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.562017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.573567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.573598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.584770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.584800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.334 [2024-07-24 17:51:57.596550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.334 [2024-07-24 17:51:57.596588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.592 [2024-07-24 17:51:57.608747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.592 [2024-07-24 17:51:57.608778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.592 [2024-07-24 17:51:57.620071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.592 [2024-07-24 17:51:57.620109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.592 [2024-07-24 17:51:57.631232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.631260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.642525] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.642555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.655649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.655680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.666155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.666182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.677815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.677846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.689344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.689371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.702906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.702936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.714451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.714481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.725910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.725939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.737261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.737288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.750651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.750681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.761602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.761632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.773676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.773705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.785821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.785850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.797075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.797113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.808535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.808566] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.819752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.819790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.833191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.833220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.844005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.844036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.593 [2024-07-24 17:51:57.855335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.593 [2024-07-24 17:51:57.855362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.851 [2024-07-24 17:51:57.868833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.851 [2024-07-24 17:51:57.868865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.851 [2024-07-24 17:51:57.879471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.851 [2024-07-24 17:51:57.879501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.891313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.891341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.902849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.902878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.914655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.914686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.925854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.925884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.939053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.939083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.949289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.949316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.960421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.960451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.977592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.977624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.988282] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.988311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:57.999796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:57.999827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.011220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.011248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.022924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.022954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.034454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.034485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.045979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.046010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.057460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.057490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.068899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.068930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.080201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.080230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.091592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.091622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.102987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.103018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.852 [2024-07-24 17:51:58.115165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.852 [2024-07-24 17:51:58.115192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.127580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.127611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.139058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.139088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.152360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.152387] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.162635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.162666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.174347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.174375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.185461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.185492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.197745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.197776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.208431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.208457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.219665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.219695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.110 [2024-07-24 17:51:58.232691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.110 [2024-07-24 17:51:58.232721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.243076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.243118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.254465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.254495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.265439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.265470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.276517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.276547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.287919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.287949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.298948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.298978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.310063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.310096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.321750] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.321780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.333034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.333064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.344097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.344150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.111 [2024-07-24 17:51:58.354019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.111 [2024-07-24 17:51:58.354049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.396345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.396387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 00:09:12.369 Latency(us) 00:09:12.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.369 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:12.369 Nvme1n1 : 5.05 11056.60 86.38 0.00 0.00 11466.15 5097.24 52428.80 00:09:12.369 =================================================================================================================== 00:09:12.369 Total : 11056.60 86.38 0.00 0.00 11466.15 5097.24 52428.80 00:09:12.369 [2024-07-24 17:51:58.402834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.402863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.410854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.410883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.418856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.418879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.426923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.426965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.434948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.434994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.442963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.443004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.450986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.451030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.369 [2024-07-24 17:51:58.459015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.369 [2024-07-24 17:51:58.459060] 
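For context on the failure summarized above: spdk_nvmf_subsystem_add_ns_ext refuses to attach a namespace when the requested NSID is already taken, and the RPC layer reports that as "Unable to add namespace". A minimal sketch of the collision and its resolution with SPDK's scripts/rpc.py follows; the subsystem NQN comes from this run, while malloc0 as the already-attached bdev and the standalone rpc.py invocations are assumptions for illustration, not the test's own code:

  # show the subsystems and their attached namespaces; NSID 1 is still present
  scripts/rpc.py nvmf_get_subsystems
  # adding another bdev with the same fixed NSID fails ("NSID 1 already in use")
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # detaching the namespace first lets the add succeed, which is what the
  # trace below does (nvmf_subsystem_remove_ns, then a fresh add)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1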
[... the error pair resumes at 17:51:58.402, right after the job summary, and repeats at roughly 8 ms intervals through 17:51:58.659, until the loop's kill -0 check finds process 2710945 gone; duplicates trimmed ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2710945) - No such process 00:09:12.627 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2710945 00:09:12.627 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.627 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.627 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:12.628 17:51:58
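A note on the bdev_delay_create call just traced: it wraps the existing malloc0 bdev in a pass-through bdev named delay0, with -r/-t the average and p99 read latency and -w/-n the average and p99 write latency, interpreted in microseconds, so 1000000 everywhere should hold every I/O in flight for about a second; that is what gives the abort example below something to cancel. Recreated standalone it would look roughly like this (same names and values as the trace; invoking rpc.py directly is an assumption, since the harness goes through its rpc_cmd wrapper):

  # build a ~1 s delay bdev over malloc0 and expose it as NSID 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1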
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.628 delay0 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.628 17:51:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:12.628 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.628 [2024-07-24 17:51:58.782221] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:20.734 Initializing NVMe Controllers 00:09:20.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:20.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:20.734 Initialization complete. Launching workers. 00:09:20.734 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 19520 00:09:20.734 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19655, failed to submit 109 00:09:20.734 success 19557, unsuccess 98, failed 0 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.734 rmmod nvme_tcp 00:09:20.734 rmmod nvme_fabrics 00:09:20.734 rmmod nvme_keyring 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2709601 ']' 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2709601 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2709601 ']' 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2709601 00:09:20.734 17:52:05 
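The abort run's counters above are internally consistent, which is worth spelling out because "failed: 19520" looks alarming at first glance: an aborted request completes with an error status, so against a 1 s delay bdev nearly every queued I/O is expected to fail. Checking the arithmetic from the trace (the totals are consistent with one abort attempt per I/O):

  244 completed + 19520 failed          = 19764 I/Os issued on NSID 1
  19655 aborts submitted + 109 not sent = 19764 abort attempts
  19557 successful + 98 unsuccessful    = 19655 aborts submitted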
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2709601 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2709601' 00:09:20.734 killing process with pid 2709601 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2709601 00:09:20.734 17:52:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2709601 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.734 17:52:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.110 00:09:22.110 real 0m28.830s 00:09:22.110 user 0m40.017s 00:09:22.110 sys 0m10.664s 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.110 ************************************ 00:09:22.110 END TEST nvmf_zcopy 00:09:22.110 ************************************ 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.110 ************************************ 00:09:22.110 START TEST nvmf_nmic 00:09:22.110 ************************************ 00:09:22.110 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:22.368 * Looking for test storage... 
00:09:22.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... same repeated toolchain entries as the PATH printed above; trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.368 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated toolchain entries; trimmed ...]:/var/lib/snapd/snap/bin 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated toolchain entries; trimmed ...]:/var/lib/snapd/snap/bin 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.369 17:52:08
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.369 17:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:24.270 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:24.270 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.270 17:52:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:24.270 Found net devices under 0000:09:00.0: cvl_0_0 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:24.270 Found net devices under 0000:09:00.1: cvl_0_1 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.270 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.271 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:09:24.529 00:09:24.529 --- 10.0.0.2 ping statistics --- 00:09:24.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.529 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:09:24.529 00:09:24.529 --- 10.0.0.1 ping statistics --- 00:09:24.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.529 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2714436 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2714436 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2714436 ']' 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.529 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.530 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.530 17:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.530 [2024-07-24 17:52:10.665440] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:09:24.530 [2024-07-24 17:52:10.665531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.530 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.530 [2024-07-24 17:52:10.735250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.787 [2024-07-24 17:52:10.859728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.787 [2024-07-24 17:52:10.859792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.787 [2024-07-24 17:52:10.859810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.787 [2024-07-24 17:52:10.859823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.787 [2024-07-24 17:52:10.859835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
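[Editor's note] The trace above is the harness's nvmf_tcp_init phase: the two ice-driven E810 ports (cvl_0_0, cvl_0_1) are split across a network namespace so target and initiator traffic crosses the physical link, and nvmf_tgt is then launched inside that namespace. Condensed into plain commands, as a sketch distilled from the nvmf/common.sh lines traced above (run as root; the cvl_0_* names and 10.0.0.0/24 addresses are specific to this run):

    # move one port into its own namespace; the peer port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
    ping -c 1 10.0.0.2                                  # root ns -> target, over the wire
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

With both pings answering, the nvmf_tgt process itself is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.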
00:09:24.787 [2024-07-24 17:52:10.859924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.787 [2024-07-24 17:52:10.859981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.787 [2024-07-24 17:52:10.860159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.787 [2024-07-24 17:52:10.860163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.351 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.351 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:25.351 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.351 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.351 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 [2024-07-24 17:52:11.628827] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 Malloc0 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 [2024-07-24 17:52:11.681016] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:25.609 test case1: single bdev can't be used in multiple subsystems 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 [2024-07-24 17:52:11.704894] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:25.609 [2024-07-24 17:52:11.704923] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:25.609 [2024-07-24 17:52:11.704937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.609 request: 00:09:25.609 { 00:09:25.609 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:25.609 "namespace": { 00:09:25.609 "bdev_name": "Malloc0", 00:09:25.609 "no_auto_visible": false 00:09:25.609 }, 00:09:25.609 "method": "nvmf_subsystem_add_ns", 00:09:25.609 "req_id": 1 00:09:25.609 } 00:09:25.609 Got JSON-RPC error response 00:09:25.609 response: 00:09:25.609 { 00:09:25.609 "code": -32602, 00:09:25.609 "message": "Invalid parameters" 00:09:25.609 } 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:25.609 Adding namespace failed - expected result. 
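[Editor's note] Test case1 above exercises bdev claiming: once nqn.2016-06.io.spdk:cnode1 adds Malloc0 as a namespace, the NVMe-oF target holds an exclusive_write claim on the bdev (per the bdev_open error in the trace), so attaching the same bdev to cnode2 is rejected with JSON-RPC error -32602, exactly the failure the test expects. A sketch of the equivalent sequence against a running target, using the same RPC names as the rpc_cmd calls traced above (assumes it is run from an SPDK checkout with the default /var/tmp/spdk.sock socket; the if/else scaffolding is illustrative, not the nmic.sh code):

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # cnode1 now owns Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: second subsystem claimed Malloc0"
    else
        echo "expected: bdev already claimed, add_ns rejected"      # matches the -32602 above
    fi

Test case2, which follows, then shows the permitted counterpart: one subsystem exposing the same namespace through two listeners (ports 4420 and 4421), connected as two paths from the host.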
00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:25.609 test case2: host connect to nvmf target in multiple paths 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:25.609 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.610 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.610 [2024-07-24 17:52:11.713007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:25.610 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.610 17:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.175 17:52:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:26.740 17:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.740 17:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:09:26.740 17:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.740 17:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:09:26.740 17:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:09:29.317 17:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:09:29.317 17:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:09:29.317 17:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.317 17:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:09:29.317 17:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.317 17:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 00:09:29.317 17:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:29.317 [global] 00:09:29.317 thread=1 00:09:29.317 invalidate=1 00:09:29.317 rw=write 00:09:29.317 time_based=1 00:09:29.317 runtime=1 00:09:29.317 ioengine=libaio 00:09:29.317 direct=1 00:09:29.317 bs=4096 00:09:29.317 iodepth=1 00:09:29.317 norandommap=0 00:09:29.317 numjobs=1 00:09:29.317 00:09:29.317 verify_dump=1 00:09:29.317 verify_backlog=512 00:09:29.317 verify_state_save=0 00:09:29.317 do_verify=1 00:09:29.317 verify=crc32c-intel 00:09:29.317 [job0] 00:09:29.317 filename=/dev/nvme0n1 00:09:29.317 Could not set queue depth (nvme0n1) 00:09:29.317 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:29.317 fio-3.35 00:09:29.317 Starting 1 thread 00:09:30.250 00:09:30.250 job0: (groupid=0, jobs=1): err= 0: pid=2715117: Wed Jul 24 17:52:16 2024 00:09:30.250 read: IOPS=33, BW=136KiB/s (139kB/s)(136KiB/1001msec) 00:09:30.250 slat (nsec): min=6718, max=32466, avg=25474.24, stdev=9117.47 00:09:30.250 clat (usec): min=261, max=41246, avg=26625.83, stdev=19725.71 00:09:30.250 lat (usec): min=273, max=41272, avg=26651.30, stdev=19728.56 00:09:30.250 clat percentiles (usec): 00:09:30.250 | 1.00th=[ 262], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 334], 00:09:30.250 | 30.00th=[ 359], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:30.250 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:30.250 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:30.250 | 99.99th=[41157] 00:09:30.250 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:30.250 slat (nsec): min=5635, max=28569, avg=6911.57, stdev=2458.35 00:09:30.250 clat (usec): min=157, max=308, avg=174.64, stdev=10.92 00:09:30.250 lat (usec): min=163, max=336, avg=181.55, stdev=11.75 00:09:30.250 clat percentiles (usec): 00:09:30.250 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 165], 00:09:30.250 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:09:30.250 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:09:30.250 | 99.00th=[ 200], 99.50th=[ 202], 99.90th=[ 310], 99.95th=[ 310], 00:09:30.250 | 99.99th=[ 310] 00:09:30.251 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:30.251 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:30.251 lat (usec) : 250=93.59%, 500=2.38% 00:09:30.251 lat (msec) : 50=4.03% 00:09:30.251 cpu : usr=0.10%, sys=0.40%, ctx=546, majf=0, minf=2 00:09:30.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.251 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.251 00:09:30.251 Run status group 0 (all jobs): 00:09:30.251 READ: bw=136KiB/s (139kB/s), 136KiB/s-136KiB/s (139kB/s-139kB/s), io=136KiB (139kB), run=1001-1001msec 00:09:30.251 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:30.251 00:09:30.251 Disk stats (read/write): 00:09:30.251 nvme0n1: ios=69/512, merge=0/0, ticks=813/85, in_queue=898, util=91.88% 00:09:30.251 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:09:30.509 17:52:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.509 rmmod nvme_tcp 00:09:30.509 rmmod nvme_fabrics 00:09:30.509 rmmod nvme_keyring 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2714436 ']' 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2714436 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2714436 ']' 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2714436 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2714436 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2714436' 00:09:30.509 killing process with pid 2714436 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2714436 00:09:30.509 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2714436 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.769 17:52:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.305 17:52:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.305 00:09:33.305 real 0m10.627s 00:09:33.305 user 0m25.164s 00:09:33.305 sys 0m2.310s 00:09:33.305 17:52:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.305 17:52:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.305 ************************************ 00:09:33.305 END TEST nvmf_nmic 00:09:33.305 ************************************ 00:09:33.305 17:52:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.305 17:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.305 17:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.305 17:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.305 ************************************ 00:09:33.305 START TEST nvmf_fio_target 00:09:33.305 ************************************ 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.305 * Looking for test storage... 00:09:33.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.305 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.306 17:52:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:35.209 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:35.209 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.209 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:35.210 Found net devices under 0000:09:00.0: cvl_0_0 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:35.210 Found net devices under 0000:09:00.1: cvl_0_1 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:35.210 17:52:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:09:35.210 00:09:35.210 --- 10.0.0.2 ping statistics --- 00:09:35.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.210 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:09:35.210 00:09:35.210 --- 10.0.0.1 ping statistics --- 00:09:35.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.210 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2717193 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2717193 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2717193 ']' 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.210 17:52:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.210 17:52:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.210 [2024-07-24 17:52:21.340168] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:09:35.210 [2024-07-24 17:52:21.340246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.210 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.210 [2024-07-24 17:52:21.409494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.468 [2024-07-24 17:52:21.533844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.468 [2024-07-24 17:52:21.533902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.468 [2024-07-24 17:52:21.533918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.468 [2024-07-24 17:52:21.533931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.468 [2024-07-24 17:52:21.533943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
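[Editor's note] As in the nmic run, nvmfappstart launches build/bin/nvmf_tgt inside the target namespace (here as pid 2717193) and waitforlisten blocks until the app's JSON-RPC server answers on /var/tmp/spdk.sock before any rpc.py call is issued. A rough stand-in for that start-and-wait pattern (a sketch only, not the autotest_common.sh implementation; rpc_get_methods is used merely as a cheap liveness probe, and paths assume an SPDK checkout):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!    # -i: shm id, -e: tracepoint group mask, -m: core mask
    # poll the RPC socket until the target answers; only then is it safe to configure it
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup"; exit 1; }
        sleep 0.5
    done

The 0xF core mask is what produces the four 'Reactor started on core N' notices that follow.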
00:09:35.468 [2024-07-24 17:52:21.534001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.468 [2024-07-24 17:52:21.534052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.468 [2024-07-24 17:52:21.534074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.468 [2024-07-24 17:52:21.534077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:36.403 [2024-07-24 17:52:22.624721] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.403 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.661 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:36.661 17:52:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.241 17:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:37.241 17:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.505 17:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:37.505 17:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.763 17:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:37.763 17:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:38.020 17:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.278 17:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:38.278 17:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.536 17:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:38.536 17:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.794 17:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:38.794 17:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:39.051 17:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.309 17:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.309 17:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.567 17:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.567 17:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.824 17:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.083 [2024-07-24 17:52:26.114870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.083 17:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:40.340 17:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:40.597 17:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.162 17:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:41.162 17:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:09:41.162 17:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.162 17:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:09:41.162 17:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:09:41.162 17:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:09:43.060 17:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:09:43.318 17:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:09:43.318 17:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.318 17:52:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:09:43.318 17:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.318 17:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:09:43.318 17:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:43.318 [global] 00:09:43.318 thread=1 00:09:43.318 invalidate=1 00:09:43.318 rw=write 00:09:43.318 time_based=1 00:09:43.318 runtime=1 00:09:43.318 ioengine=libaio 00:09:43.318 direct=1 00:09:43.318 bs=4096 00:09:43.318 iodepth=1 00:09:43.318 norandommap=0 00:09:43.318 numjobs=1 00:09:43.318 00:09:43.318 verify_dump=1 00:09:43.318 verify_backlog=512 00:09:43.318 verify_state_save=0 00:09:43.318 do_verify=1 00:09:43.318 verify=crc32c-intel 00:09:43.318 [job0] 00:09:43.318 filename=/dev/nvme0n1 00:09:43.318 [job1] 00:09:43.318 filename=/dev/nvme0n2 00:09:43.318 [job2] 00:09:43.318 filename=/dev/nvme0n3 00:09:43.318 [job3] 00:09:43.318 filename=/dev/nvme0n4 00:09:43.318 Could not set queue depth (nvme0n1) 00:09:43.318 Could not set queue depth (nvme0n2) 00:09:43.318 Could not set queue depth (nvme0n3) 00:09:43.318 Could not set queue depth (nvme0n4) 00:09:43.575 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.575 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.575 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.575 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.575 fio-3.35 00:09:43.575 Starting 4 threads 00:09:44.947 00:09:44.947 job0: (groupid=0, jobs=1): err= 0: pid=2718285: Wed Jul 24 17:52:30 2024 00:09:44.947 read: IOPS=70, BW=284KiB/s (291kB/s)(288KiB/1015msec) 00:09:44.947 slat (nsec): min=6597, max=35833, avg=19814.29, stdev=9062.95 00:09:44.947 clat (usec): min=296, max=41964, avg=12299.20, stdev=18667.58 00:09:44.947 lat (usec): min=303, max=42000, avg=12319.02, stdev=18673.95 00:09:44.947 clat percentiles (usec): 00:09:44.947 | 1.00th=[ 297], 5.00th=[ 338], 10.00th=[ 355], 20.00th=[ 379], 00:09:44.947 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 461], 00:09:44.947 | 70.00th=[ 816], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:44.947 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:44.947 | 99.99th=[42206] 00:09:44.947 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:09:44.947 slat (nsec): min=6718, max=49467, avg=11539.82, stdev=5353.49 00:09:44.947 clat (usec): min=181, max=474, avg=234.47, stdev=38.80 00:09:44.947 lat (usec): min=189, max=484, avg=246.01, stdev=39.88 00:09:44.947 clat percentiles (usec): 00:09:44.947 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:09:44.947 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:09:44.947 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 269], 95.00th=[ 310], 00:09:44.947 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 474], 99.95th=[ 474], 00:09:44.947 | 99.99th=[ 474] 00:09:44.947 bw ( KiB/s): min= 4096, max= 4096, per=25.38%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.947 iops : min= 1024, max= 1024, avg=1024.00, stdev= 
0.00, samples=1 00:09:44.947 lat (usec) : 250=74.49%, 500=21.23%, 750=0.51%, 1000=0.17% 00:09:44.947 lat (msec) : 50=3.60% 00:09:44.947 cpu : usr=0.59%, sys=0.49%, ctx=586, majf=0, minf=2 00:09:44.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.947 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.947 job1: (groupid=0, jobs=1): err= 0: pid=2718289: Wed Jul 24 17:52:30 2024 00:09:44.947 read: IOPS=1171, BW=4687KiB/s (4800kB/s)(4692KiB/1001msec) 00:09:44.947 slat (nsec): min=5247, max=56394, avg=24824.82, stdev=10211.33 00:09:44.947 clat (usec): min=250, max=41115, avg=452.76, stdev=1193.18 00:09:44.947 lat (usec): min=259, max=41124, avg=477.58, stdev=1192.93 00:09:44.947 clat percentiles (usec): 00:09:44.947 | 1.00th=[ 269], 5.00th=[ 306], 10.00th=[ 326], 20.00th=[ 355], 00:09:44.947 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 429], 00:09:44.947 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 502], 95.00th=[ 553], 00:09:44.947 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[ 3195], 99.95th=[41157], 00:09:44.947 | 99.99th=[41157] 00:09:44.947 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:44.947 slat (nsec): min=6614, max=66737, avg=17405.83, stdev=9643.94 00:09:44.947 clat (usec): min=171, max=1719, avg=258.27, stdev=81.24 00:09:44.947 lat (usec): min=184, max=1727, avg=275.68, stdev=83.21 00:09:44.947 clat percentiles (usec): 00:09:44.947 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 217], 00:09:44.947 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:09:44.947 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 371], 00:09:44.947 | 99.00th=[ 449], 99.50th=[ 486], 99.90th=[ 1565], 99.95th=[ 1713], 00:09:44.947 | 99.99th=[ 1713] 00:09:44.947 bw ( KiB/s): min= 7256, max= 7256, per=44.95%, avg=7256.00, stdev= 0.00, samples=1 00:09:44.947 iops : min= 1814, max= 1814, avg=1814.00, stdev= 0.00, samples=1 00:09:44.947 lat (usec) : 250=35.22%, 500=59.87%, 750=4.65%, 1000=0.04% 00:09:44.947 lat (msec) : 2=0.15%, 4=0.04%, 50=0.04% 00:09:44.947 cpu : usr=3.10%, sys=5.90%, ctx=2710, majf=0, minf=1 00:09:44.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.947 issued rwts: total=1173,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.947 job2: (groupid=0, jobs=1): err= 0: pid=2718290: Wed Jul 24 17:52:30 2024 00:09:44.947 read: IOPS=1283, BW=5135KiB/s (5258kB/s)(5140KiB/1001msec) 00:09:44.947 slat (nsec): min=5211, max=53407, avg=18482.07, stdev=6424.32 00:09:44.947 clat (usec): min=272, max=3084, avg=402.81, stdev=98.93 00:09:44.947 lat (usec): min=279, max=3102, avg=421.29, stdev=100.41 00:09:44.947 clat percentiles (usec): 00:09:44.947 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 330], 20.00th=[ 367], 00:09:44.947 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 388], 60.00th=[ 396], 00:09:44.947 | 70.00th=[ 404], 80.00th=[ 433], 90.00th=[ 490], 95.00th=[ 537], 00:09:44.947 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 3097], 00:09:44.947 | 
99.99th=[ 3097] 00:09:44.947 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:44.947 slat (nsec): min=7381, max=64170, avg=18981.65, stdev=8971.52 00:09:44.947 clat (usec): min=176, max=1532, avg=270.18, stdev=86.16 00:09:44.947 lat (usec): min=184, max=1541, avg=289.16, stdev=89.49 00:09:44.947 clat percentiles (usec): 00:09:44.947 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:09:44.947 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 255], 00:09:44.947 | 70.00th=[ 293], 80.00th=[ 343], 90.00th=[ 392], 95.00th=[ 424], 00:09:44.947 | 99.00th=[ 498], 99.50th=[ 586], 99.90th=[ 783], 99.95th=[ 1532], 00:09:44.947 | 99.99th=[ 1532] 00:09:44.947 bw ( KiB/s): min= 5720, max= 5720, per=35.44%, avg=5720.00, stdev= 0.00, samples=1 00:09:44.947 iops : min= 1430, max= 1430, avg=1430.00, stdev= 0.00, samples=1 00:09:44.947 lat (usec) : 250=31.51%, 500=63.95%, 750=4.43%, 1000=0.04% 00:09:44.947 lat (msec) : 2=0.04%, 4=0.04% 00:09:44.947 cpu : usr=3.70%, sys=6.80%, ctx=2822, majf=0, minf=1 00:09:44.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.947 issued rwts: total=1285,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.947 job3: (groupid=0, jobs=1): err= 0: pid=2718291: Wed Jul 24 17:52:30 2024 00:09:44.947 read: IOPS=139, BW=559KiB/s (573kB/s)(560KiB/1001msec) 00:09:44.947 slat (nsec): min=6756, max=33762, avg=22431.50, stdev=9593.69 00:09:44.947 clat (usec): min=266, max=41921, avg=5896.33, stdev=13976.16 00:09:44.948 lat (usec): min=273, max=41954, avg=5918.76, stdev=13978.30 00:09:44.948 clat percentiles (usec): 00:09:44.948 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 302], 00:09:44.948 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 343], 00:09:44.948 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[41157], 95.00th=[41157], 00:09:44.948 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:44.948 | 99.99th=[41681] 00:09:44.948 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:44.948 slat (nsec): min=7189, max=56197, avg=16659.57, stdev=8023.12 00:09:44.948 clat (usec): min=184, max=1501, avg=313.93, stdev=116.44 00:09:44.948 lat (usec): min=200, max=1512, avg=330.59, stdev=119.89 00:09:44.948 clat percentiles (usec): 00:09:44.948 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 219], 00:09:44.948 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 262], 60.00th=[ 330], 00:09:44.948 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 478], 00:09:44.948 | 99.00th=[ 570], 99.50th=[ 701], 99.90th=[ 1500], 99.95th=[ 1500], 00:09:44.948 | 99.99th=[ 1500] 00:09:44.948 bw ( KiB/s): min= 4096, max= 4096, per=25.38%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.948 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.948 lat (usec) : 250=34.51%, 500=60.58%, 750=1.53%, 1000=0.15% 00:09:44.948 lat (msec) : 2=0.15%, 10=0.15%, 50=2.91% 00:09:44.948 cpu : usr=0.70%, sys=1.00%, ctx=654, majf=0, minf=1 00:09:44.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.948 
issued rwts: total=140,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.948 00:09:44.948 Run status group 0 (all jobs): 00:09:44.948 READ: bw=10.3MiB/s (10.8MB/s), 284KiB/s-5135KiB/s (291kB/s-5258kB/s), io=10.4MiB (10.9MB), run=1001-1015msec 00:09:44.948 WRITE: bw=15.8MiB/s (16.5MB/s), 2018KiB/s-6138KiB/s (2066kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1015msec 00:09:44.948 00:09:44.948 Disk stats (read/write): 00:09:44.948 nvme0n1: ios=119/512, merge=0/0, ticks=993/119, in_queue=1112, util=97.60% 00:09:44.948 nvme0n2: ios=1044/1199, merge=0/0, ticks=491/285, in_queue=776, util=86.66% 00:09:44.948 nvme0n3: ios=1082/1264, merge=0/0, ticks=702/331, in_queue=1033, util=97.69% 00:09:44.948 nvme0n4: ios=191/512, merge=0/0, ticks=1099/150, in_queue=1249, util=97.67% 00:09:44.948 17:52:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:44.948 [global] 00:09:44.948 thread=1 00:09:44.948 invalidate=1 00:09:44.948 rw=randwrite 00:09:44.948 time_based=1 00:09:44.948 runtime=1 00:09:44.948 ioengine=libaio 00:09:44.948 direct=1 00:09:44.948 bs=4096 00:09:44.948 iodepth=1 00:09:44.948 norandommap=0 00:09:44.948 numjobs=1 00:09:44.948 00:09:44.948 verify_dump=1 00:09:44.948 verify_backlog=512 00:09:44.948 verify_state_save=0 00:09:44.948 do_verify=1 00:09:44.948 verify=crc32c-intel 00:09:44.948 [job0] 00:09:44.948 filename=/dev/nvme0n1 00:09:44.948 [job1] 00:09:44.948 filename=/dev/nvme0n2 00:09:44.948 [job2] 00:09:44.948 filename=/dev/nvme0n3 00:09:44.948 [job3] 00:09:44.948 filename=/dev/nvme0n4 00:09:44.948 Could not set queue depth (nvme0n1) 00:09:44.948 Could not set queue depth (nvme0n2) 00:09:44.948 Could not set queue depth (nvme0n3) 00:09:44.948 Could not set queue depth (nvme0n4) 00:09:44.948 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.948 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.948 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.948 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.948 fio-3.35 00:09:44.948 Starting 4 threads 00:09:46.318 00:09:46.318 job0: (groupid=0, jobs=1): err= 0: pid=2718633: Wed Jul 24 17:52:32 2024 00:09:46.318 read: IOPS=19, BW=79.6KiB/s (81.5kB/s)(80.0KiB/1005msec) 00:09:46.318 slat (nsec): min=15322, max=35601, avg=28564.25, stdev=8891.10 00:09:46.318 clat (usec): min=40900, max=42026, avg=41127.20, stdev=385.23 00:09:46.318 lat (usec): min=40936, max=42062, avg=41155.76, stdev=384.54 00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:46.318 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:46.318 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:46.318 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:46.318 | 99.99th=[42206] 00:09:46.318 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:46.318 slat (nsec): min=10069, max=69183, avg=26597.35, stdev=8835.91 00:09:46.318 clat (usec): min=201, max=3754, avg=320.33, stdev=179.21 00:09:46.318 lat (usec): min=222, max=3778, avg=346.92, stdev=181.51 
00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 231], 00:09:46.318 | 30.00th=[ 241], 40.00th=[ 262], 50.00th=[ 285], 60.00th=[ 322], 00:09:46.318 | 70.00th=[ 371], 80.00th=[ 416], 90.00th=[ 437], 95.00th=[ 449], 00:09:46.318 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 3752], 99.95th=[ 3752], 00:09:46.318 | 99.99th=[ 3752] 00:09:46.318 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:09:46.318 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:46.318 lat (usec) : 250=34.21%, 500=60.53%, 750=1.13% 00:09:46.318 lat (msec) : 2=0.19%, 4=0.19%, 50=3.76% 00:09:46.318 cpu : usr=0.90%, sys=1.79%, ctx=533, majf=0, minf=1 00:09:46.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.318 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.318 job1: (groupid=0, jobs=1): err= 0: pid=2718634: Wed Jul 24 17:52:32 2024 00:09:46.318 read: IOPS=20, BW=82.0KiB/s (83.9kB/s)(84.0KiB/1025msec) 00:09:46.318 slat (nsec): min=15278, max=33963, avg=24509.71, stdev=8898.51 00:09:46.318 clat (usec): min=40896, max=42055, avg=41490.98, stdev=523.84 00:09:46.318 lat (usec): min=40929, max=42070, avg=41515.49, stdev=521.42 00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:46.318 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:46.318 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:46.318 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:46.318 | 99.99th=[42206] 00:09:46.318 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:46.318 slat (nsec): min=7238, max=46746, avg=19844.12, stdev=7924.28 00:09:46.318 clat (usec): min=225, max=498, avg=272.82, stdev=37.63 00:09:46.318 lat (usec): min=241, max=517, avg=292.67, stdev=39.37 00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:09:46.318 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:09:46.318 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 347], 00:09:46.318 | 99.00th=[ 404], 99.50th=[ 465], 99.90th=[ 498], 99.95th=[ 498], 00:09:46.318 | 99.99th=[ 498] 00:09:46.318 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:09:46.318 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:46.318 lat (usec) : 250=28.89%, 500=67.17% 00:09:46.318 lat (msec) : 50=3.94% 00:09:46.318 cpu : usr=1.07%, sys=0.49%, ctx=533, majf=0, minf=1 00:09:46.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.318 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.318 job2: (groupid=0, jobs=1): err= 0: pid=2718636: Wed Jul 24 17:52:32 2024 00:09:46.318 read: IOPS=488, BW=1955KiB/s (2002kB/s)(2020KiB/1033msec) 00:09:46.318 slat (nsec): min=15154, max=51820, avg=18265.08, 
stdev=4030.54 00:09:46.318 clat (usec): min=292, max=41934, avg=1695.33, stdev=7360.40 00:09:46.318 lat (usec): min=315, max=41969, avg=1713.59, stdev=7361.78 00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 306], 20.00th=[ 310], 00:09:46.318 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:09:46.318 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 343], 95.00th=[ 367], 00:09:46.318 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:46.318 | 99.99th=[41681] 00:09:46.318 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:09:46.318 slat (nsec): min=10061, max=68140, avg=24417.55, stdev=7815.01 00:09:46.318 clat (usec): min=224, max=480, avg=289.48, stdev=57.46 00:09:46.318 lat (usec): min=242, max=518, avg=313.90, stdev=60.17 00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 247], 00:09:46.318 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:09:46.318 | 70.00th=[ 297], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 420], 00:09:46.318 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 482], 99.95th=[ 482], 00:09:46.318 | 99.99th=[ 482] 00:09:46.318 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:09:46.318 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:46.318 lat (usec) : 250=12.29%, 500=85.94%, 750=0.10% 00:09:46.318 lat (msec) : 50=1.67% 00:09:46.318 cpu : usr=1.55%, sys=2.81%, ctx=1017, majf=0, minf=2 00:09:46.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.318 issued rwts: total=505,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.318 job3: (groupid=0, jobs=1): err= 0: pid=2718637: Wed Jul 24 17:52:32 2024 00:09:46.318 read: IOPS=41, BW=167KiB/s (171kB/s)(168KiB/1008msec) 00:09:46.318 slat (nsec): min=6377, max=34160, avg=21003.57, stdev=9608.43 00:09:46.318 clat (usec): min=324, max=41992, avg=20529.27, stdev=20462.98 00:09:46.318 lat (usec): min=336, max=42026, avg=20550.27, stdev=20469.82 00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 351], 00:09:46.318 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[41157], 00:09:46.318 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:46.318 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:46.318 | 99.99th=[42206] 00:09:46.318 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:46.318 slat (nsec): min=8077, max=57484, avg=21684.66, stdev=8865.56 00:09:46.318 clat (usec): min=182, max=489, avg=254.07, stdev=53.03 00:09:46.318 lat (usec): min=205, max=507, avg=275.76, stdev=54.50 00:09:46.318 clat percentiles (usec): 00:09:46.318 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:09:46.318 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:09:46.318 | 70.00th=[ 249], 80.00th=[ 289], 90.00th=[ 347], 95.00th=[ 375], 00:09:46.319 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 490], 99.95th=[ 490], 00:09:46.319 | 99.99th=[ 490] 00:09:46.319 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:09:46.319 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:09:46.319 lat (usec) : 250=65.52%, 500=30.69% 00:09:46.319 lat (msec) : 50=3.79% 00:09:46.319 cpu : usr=0.99%, sys=0.79%, ctx=555, majf=0, minf=1 00:09:46.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.319 issued rwts: total=42,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.319 00:09:46.319 Run status group 0 (all jobs): 00:09:46.319 READ: bw=2277KiB/s (2332kB/s), 79.6KiB/s-1955KiB/s (81.5kB/s-2002kB/s), io=2352KiB (2408kB), run=1005-1033msec 00:09:46.319 WRITE: bw=7930KiB/s (8121kB/s), 1983KiB/s-2038KiB/s (2030kB/s-2087kB/s), io=8192KiB (8389kB), run=1005-1033msec 00:09:46.319 00:09:46.319 Disk stats (read/write): 00:09:46.319 nvme0n1: ios=68/512, merge=0/0, ticks=1549/152, in_queue=1701, util=97.39% 00:09:46.319 nvme0n2: ios=41/512, merge=0/0, ticks=689/140, in_queue=829, util=86.86% 00:09:46.319 nvme0n3: ios=500/512, merge=0/0, ticks=645/139, in_queue=784, util=88.77% 00:09:46.319 nvme0n4: ios=74/512, merge=0/0, ticks=951/122, in_queue=1073, util=97.46% 00:09:46.319 17:52:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:46.319 [global] 00:09:46.319 thread=1 00:09:46.319 invalidate=1 00:09:46.319 rw=write 00:09:46.319 time_based=1 00:09:46.319 runtime=1 00:09:46.319 ioengine=libaio 00:09:46.319 direct=1 00:09:46.319 bs=4096 00:09:46.319 iodepth=128 00:09:46.319 norandommap=0 00:09:46.319 numjobs=1 00:09:46.319 00:09:46.319 verify_dump=1 00:09:46.319 verify_backlog=512 00:09:46.319 verify_state_save=0 00:09:46.319 do_verify=1 00:09:46.319 verify=crc32c-intel 00:09:46.319 [job0] 00:09:46.319 filename=/dev/nvme0n1 00:09:46.319 [job1] 00:09:46.319 filename=/dev/nvme0n2 00:09:46.319 [job2] 00:09:46.319 filename=/dev/nvme0n3 00:09:46.319 [job3] 00:09:46.319 filename=/dev/nvme0n4 00:09:46.319 Could not set queue depth (nvme0n1) 00:09:46.319 Could not set queue depth (nvme0n2) 00:09:46.319 Could not set queue depth (nvme0n3) 00:09:46.319 Could not set queue depth (nvme0n4) 00:09:46.319 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.319 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.319 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.319 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.319 fio-3.35 00:09:46.319 Starting 4 threads 00:09:47.694 00:09:47.694 job0: (groupid=0, jobs=1): err= 0: pid=2718861: Wed Jul 24 17:52:33 2024 00:09:47.694 read: IOPS=3758, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1004msec) 00:09:47.694 slat (usec): min=2, max=31941, avg=143.96, stdev=1198.73 00:09:47.694 clat (usec): min=934, max=96304, avg=18692.90, stdev=17241.98 00:09:47.694 lat (usec): min=3398, max=96308, avg=18836.87, stdev=17346.40 00:09:47.694 clat percentiles (usec): 00:09:47.694 | 1.00th=[ 4015], 5.00th=[ 7570], 10.00th=[10028], 20.00th=[11469], 00:09:47.694 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[13173], 00:09:47.694 | 70.00th=[14484], 80.00th=[19006], 90.00th=[38536], 
95.00th=[56361], 00:09:47.694 | 99.00th=[91751], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:09:47.694 | 99.99th=[95945] 00:09:47.694 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:47.694 slat (usec): min=3, max=13440, avg=90.04, stdev=702.04 00:09:47.694 clat (usec): min=889, max=38160, avg=13904.10, stdev=5698.78 00:09:47.694 lat (usec): min=895, max=38166, avg=13994.13, stdev=5744.54 00:09:47.694 clat percentiles (usec): 00:09:47.694 | 1.00th=[ 4228], 5.00th=[ 5866], 10.00th=[ 7832], 20.00th=[10421], 00:09:47.694 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:09:47.694 | 70.00th=[14091], 80.00th=[16188], 90.00th=[22676], 95.00th=[28705], 00:09:47.694 | 99.00th=[31065], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:09:47.694 | 99.99th=[38011] 00:09:47.694 bw ( KiB/s): min=12288, max=20480, per=24.14%, avg=16384.00, stdev=5792.62, samples=2 00:09:47.694 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:09:47.694 lat (usec) : 1000=0.10% 00:09:47.694 lat (msec) : 4=0.71%, 10=12.38%, 20=73.68%, 50=9.50%, 100=3.62% 00:09:47.694 cpu : usr=4.39%, sys=5.28%, ctx=246, majf=0, minf=5 00:09:47.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:47.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.694 issued rwts: total=3774,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.694 job1: (groupid=0, jobs=1): err= 0: pid=2718862: Wed Jul 24 17:52:33 2024 00:09:47.694 read: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1005msec) 00:09:47.694 slat (usec): min=3, max=6016, avg=88.00, stdev=488.46 00:09:47.694 clat (usec): min=1620, max=23823, avg=11375.78, stdev=2101.78 00:09:47.694 lat (usec): min=5092, max=23847, avg=11463.77, stdev=2139.55 00:09:47.694 clat percentiles (usec): 00:09:47.694 | 1.00th=[ 5473], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:09:47.694 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:09:47.694 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13435], 95.00th=[14746], 00:09:47.694 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20317], 99.95th=[22938], 00:09:47.694 | 99.99th=[23725] 00:09:47.694 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:47.694 slat (usec): min=4, max=17892, avg=102.91, stdev=670.07 00:09:47.694 clat (msec): min=4, max=107, avg=13.41, stdev=12.51 00:09:47.694 lat (msec): min=5, max=107, avg=13.51, stdev=12.60 00:09:47.694 clat percentiles (msec): 00:09:47.694 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:09:47.694 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:09:47.694 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 14], 95.00th=[ 19], 00:09:47.694 | 99.00th=[ 82], 99.50th=[ 92], 99.90th=[ 108], 99.95th=[ 108], 00:09:47.694 | 99.99th=[ 108] 00:09:47.694 bw ( KiB/s): min=16872, max=24088, per=30.17%, avg=20480.00, stdev=5102.48, samples=2 00:09:47.694 iops : min= 4218, max= 6022, avg=5120.00, stdev=1275.62, samples=2 00:09:47.694 lat (msec) : 2=0.01%, 10=16.61%, 20=80.54%, 50=1.09%, 100=1.53% 00:09:47.694 lat (msec) : 250=0.22% 00:09:47.694 cpu : usr=6.08%, sys=7.77%, ctx=456, majf=0, minf=17 00:09:47.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:47.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:47.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.694 issued rwts: total=4915,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.694 job2: (groupid=0, jobs=1): err= 0: pid=2718863: Wed Jul 24 17:52:33 2024 00:09:47.694 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:09:47.695 slat (usec): min=3, max=15239, avg=127.98, stdev=885.51 00:09:47.695 clat (usec): min=1250, max=57654, avg=16377.95, stdev=7341.27 00:09:47.695 lat (usec): min=1268, max=57672, avg=16505.93, stdev=7396.52 00:09:47.695 clat percentiles (usec): 00:09:47.695 | 1.00th=[ 2835], 5.00th=[ 4752], 10.00th=[10814], 20.00th=[14484], 00:09:47.695 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[16450], 00:09:47.695 | 70.00th=[16909], 80.00th=[18220], 90.00th=[21365], 95.00th=[24773], 00:09:47.695 | 99.00th=[53216], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:09:47.695 | 99.99th=[57410] 00:09:47.695 write: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.1MiB/1012msec); 0 zone resets 00:09:47.695 slat (usec): min=4, max=13515, avg=127.17, stdev=625.97 00:09:47.695 clat (usec): min=3224, max=49798, avg=17994.86, stdev=7463.25 00:09:47.695 lat (usec): min=3231, max=49819, avg=18122.02, stdev=7516.29 00:09:47.695 clat percentiles (usec): 00:09:47.695 | 1.00th=[ 5800], 5.00th=[ 8979], 10.00th=[12256], 20.00th=[13435], 00:09:47.695 | 30.00th=[14353], 40.00th=[15008], 50.00th=[15795], 60.00th=[16057], 00:09:47.695 | 70.00th=[17433], 80.00th=[23462], 90.00th=[31851], 95.00th=[32900], 00:09:47.695 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:47.695 | 99.99th=[49546] 00:09:47.695 bw ( KiB/s): min=13504, max=16376, per=22.01%, avg=14940.00, stdev=2030.81, samples=2 00:09:47.695 iops : min= 3376, max= 4094, avg=3735.00, stdev=507.70, samples=2 00:09:47.695 lat (msec) : 2=0.39%, 4=1.80%, 10=5.57%, 20=72.91%, 50=18.41% 00:09:47.695 lat (msec) : 100=0.91% 00:09:47.695 cpu : usr=5.04%, sys=7.72%, ctx=456, majf=0, minf=17 00:09:47.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:47.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.695 issued rwts: total=3584,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.695 job3: (groupid=0, jobs=1): err= 0: pid=2718864: Wed Jul 24 17:52:33 2024 00:09:47.695 read: IOPS=3895, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1003msec) 00:09:47.695 slat (usec): min=2, max=9011, avg=120.35, stdev=629.21 00:09:47.695 clat (usec): min=517, max=31097, avg=15578.89, stdev=4129.76 00:09:47.695 lat (usec): min=3584, max=33146, avg=15699.24, stdev=4132.29 00:09:47.695 clat percentiles (usec): 00:09:47.695 | 1.00th=[ 7308], 5.00th=[11076], 10.00th=[11600], 20.00th=[13042], 00:09:47.695 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:09:47.695 | 70.00th=[15795], 80.00th=[18744], 90.00th=[21365], 95.00th=[25035], 00:09:47.695 | 99.00th=[27919], 99.50th=[28967], 99.90th=[31065], 99.95th=[31065], 00:09:47.695 | 99.99th=[31065] 00:09:47.695 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:47.695 slat (usec): min=3, max=11816, avg=120.91, stdev=612.79 00:09:47.695 clat (usec): min=994, max=40020, avg=16187.31, stdev=6855.56 00:09:47.695 lat (usec): min=1000, max=40033, avg=16308.22, 
stdev=6893.95 00:09:47.695 clat percentiles (usec): 00:09:47.695 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[10683], 20.00th=[12256], 00:09:47.695 | 30.00th=[12649], 40.00th=[13435], 50.00th=[13829], 60.00th=[14091], 00:09:47.695 | 70.00th=[15926], 80.00th=[18482], 90.00th=[29230], 95.00th=[32375], 00:09:47.695 | 99.00th=[35914], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:09:47.695 | 99.99th=[40109] 00:09:47.695 bw ( KiB/s): min=15864, max=16904, per=24.14%, avg=16384.00, stdev=735.39, samples=2 00:09:47.695 iops : min= 3966, max= 4226, avg=4096.00, stdev=183.85, samples=2 00:09:47.695 lat (usec) : 750=0.01%, 1000=0.02% 00:09:47.695 lat (msec) : 4=0.40%, 10=4.74%, 20=77.18%, 50=17.64% 00:09:47.695 cpu : usr=5.69%, sys=6.19%, ctx=473, majf=0, minf=11 00:09:47.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:47.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.695 issued rwts: total=3907,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.695 00:09:47.695 Run status group 0 (all jobs): 00:09:47.695 READ: bw=62.5MiB/s (65.5MB/s), 13.8MiB/s-19.1MiB/s (14.5MB/s-20.0MB/s), io=63.2MiB (66.3MB), run=1003-1012msec 00:09:47.695 WRITE: bw=66.3MiB/s (69.5MB/s), 14.9MiB/s-19.9MiB/s (15.6MB/s-20.9MB/s), io=67.1MiB (70.3MB), run=1003-1012msec 00:09:47.695 00:09:47.695 Disk stats (read/write): 00:09:47.695 nvme0n1: ios=3114/3239, merge=0/0, ticks=27836/26756, in_queue=54592, util=86.07% 00:09:47.695 nvme0n2: ios=4141/4267, merge=0/0, ticks=23070/24039, in_queue=47109, util=97.66% 00:09:47.695 nvme0n3: ios=3054/3072, merge=0/0, ticks=46420/52286, in_queue=98706, util=98.12% 00:09:47.695 nvme0n4: ios=3124/3537, merge=0/0, ticks=15460/18520, in_queue=33980, util=97.69% 00:09:47.695 17:52:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:47.695 [global] 00:09:47.695 thread=1 00:09:47.695 invalidate=1 00:09:47.695 rw=randwrite 00:09:47.695 time_based=1 00:09:47.695 runtime=1 00:09:47.695 ioengine=libaio 00:09:47.695 direct=1 00:09:47.695 bs=4096 00:09:47.695 iodepth=128 00:09:47.695 norandommap=0 00:09:47.695 numjobs=1 00:09:47.695 00:09:47.695 verify_dump=1 00:09:47.695 verify_backlog=512 00:09:47.695 verify_state_save=0 00:09:47.695 do_verify=1 00:09:47.695 verify=crc32c-intel 00:09:47.695 [job0] 00:09:47.695 filename=/dev/nvme0n1 00:09:47.695 [job1] 00:09:47.695 filename=/dev/nvme0n2 00:09:47.695 [job2] 00:09:47.695 filename=/dev/nvme0n3 00:09:47.695 [job3] 00:09:47.695 filename=/dev/nvme0n4 00:09:47.695 Could not set queue depth (nvme0n1) 00:09:47.695 Could not set queue depth (nvme0n2) 00:09:47.695 Could not set queue depth (nvme0n3) 00:09:47.695 Could not set queue depth (nvme0n4) 00:09:47.953 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.953 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.953 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.953 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.953 fio-3.35 00:09:47.953 Starting 4 threads 00:09:49.331 
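For reference, the wrapper-generated job file printed above (fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v) corresponds roughly to the direct fio invocation below; this is a sketch per device, with the filename taken from the job listing in the log, not the wrapper's exact command line:

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --bs=4096 --iodepth=128 --rw=randwrite --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_backlog=512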
00:09:49.331 job0: (groupid=0, jobs=1): err= 0: pid=2719096: Wed Jul 24 17:52:35 2024 00:09:49.331 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:09:49.331 slat (usec): min=3, max=17154, avg=117.69, stdev=869.54 00:09:49.331 clat (usec): min=6503, max=36239, avg=14954.24, stdev=5240.83 00:09:49.331 lat (usec): min=6512, max=36282, avg=15071.93, stdev=5301.92 00:09:49.331 clat percentiles (usec): 00:09:49.331 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9765], 00:09:49.331 | 30.00th=[11863], 40.00th=[12256], 50.00th=[14353], 60.00th=[15401], 00:09:49.331 | 70.00th=[16909], 80.00th=[19006], 90.00th=[21627], 95.00th=[25035], 00:09:49.331 | 99.00th=[30278], 99.50th=[33162], 99.90th=[35914], 99.95th=[35914], 00:09:49.331 | 99.99th=[36439] 00:09:49.331 write: IOPS=4236, BW=16.5MiB/s (17.4MB/s)(16.7MiB/1008msec); 0 zone resets 00:09:49.331 slat (usec): min=4, max=14051, avg=111.18, stdev=681.40 00:09:49.331 clat (usec): min=3339, max=39059, avg=15515.79, stdev=7140.64 00:09:49.331 lat (usec): min=3659, max=39068, avg=15626.97, stdev=7183.78 00:09:49.331 clat percentiles (usec): 00:09:49.331 | 1.00th=[ 5276], 5.00th=[ 6325], 10.00th=[ 8029], 20.00th=[ 9503], 00:09:49.331 | 30.00th=[10683], 40.00th=[12256], 50.00th=[14091], 60.00th=[16057], 00:09:49.331 | 70.00th=[18220], 80.00th=[21890], 90.00th=[24773], 95.00th=[30540], 00:09:49.331 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:09:49.331 | 99.99th=[39060] 00:09:49.331 bw ( KiB/s): min=15880, max=17264, per=27.02%, avg=16572.00, stdev=978.64, samples=2 00:09:49.331 iops : min= 3970, max= 4316, avg=4143.00, stdev=244.66, samples=2 00:09:49.331 lat (msec) : 4=0.08%, 10=22.57%, 20=58.00%, 50=19.35% 00:09:49.331 cpu : usr=4.57%, sys=8.44%, ctx=379, majf=0, minf=1 00:09:49.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:49.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.331 issued rwts: total=4096,4270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.331 job1: (groupid=0, jobs=1): err= 0: pid=2719097: Wed Jul 24 17:52:35 2024 00:09:49.331 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:49.331 slat (usec): min=2, max=11884, avg=75.36, stdev=595.19 00:09:49.331 clat (usec): min=1683, max=28199, avg=12536.80, stdev=4728.63 00:09:49.331 lat (usec): min=1692, max=29297, avg=12612.16, stdev=4765.78 00:09:49.331 clat percentiles (usec): 00:09:49.331 | 1.00th=[ 2278], 5.00th=[ 6325], 10.00th=[ 8094], 20.00th=[ 9110], 00:09:49.331 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10945], 60.00th=[11863], 00:09:49.331 | 70.00th=[14222], 80.00th=[17695], 90.00th=[19006], 95.00th=[21627], 00:09:49.331 | 99.00th=[26608], 99.50th=[26608], 99.90th=[27919], 99.95th=[27919], 00:09:49.331 | 99.99th=[28181] 00:09:49.331 write: IOPS=5053, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1002msec); 0 zone resets 00:09:49.331 slat (usec): min=3, max=12858, avg=95.16, stdev=660.71 00:09:49.331 clat (usec): min=635, max=44277, avg=13719.75, stdev=7708.08 00:09:49.331 lat (usec): min=666, max=44287, avg=13814.91, stdev=7762.24 00:09:49.332 clat percentiles (usec): 00:09:49.332 | 1.00th=[ 1012], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6980], 00:09:49.332 | 30.00th=[ 7832], 40.00th=[10028], 50.00th=[11207], 60.00th=[13566], 00:09:49.332 | 70.00th=[16581], 80.00th=[21103], 90.00th=[25560], 
95.00th=[27657], 00:09:49.332 | 99.00th=[34341], 99.50th=[35390], 99.90th=[44303], 99.95th=[44303], 00:09:49.332 | 99.99th=[44303] 00:09:49.332 bw ( KiB/s): min=15056, max=24440, per=32.20%, avg=19748.00, stdev=6635.49, samples=2 00:09:49.332 iops : min= 3764, max= 6110, avg=4937.00, stdev=1658.87, samples=2 00:09:49.332 lat (usec) : 750=0.04%, 1000=0.45% 00:09:49.332 lat (msec) : 2=0.59%, 4=0.48%, 10=34.57%, 20=47.62%, 50=16.24% 00:09:49.332 cpu : usr=4.19%, sys=6.59%, ctx=332, majf=0, minf=1 00:09:49.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:49.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.332 issued rwts: total=4608,5064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.332 job2: (groupid=0, jobs=1): err= 0: pid=2719098: Wed Jul 24 17:52:35 2024 00:09:49.332 read: IOPS=3553, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1002msec) 00:09:49.332 slat (usec): min=2, max=15595, avg=126.50, stdev=722.97 00:09:49.332 clat (usec): min=722, max=34936, avg=16334.50, stdev=5860.29 00:09:49.332 lat (usec): min=3739, max=35853, avg=16461.00, stdev=5906.52 00:09:49.332 clat percentiles (usec): 00:09:49.332 | 1.00th=[ 4490], 5.00th=[10814], 10.00th=[11469], 20.00th=[12387], 00:09:49.332 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13829], 60.00th=[14877], 00:09:49.332 | 70.00th=[17433], 80.00th=[20317], 90.00th=[26870], 95.00th=[30278], 00:09:49.332 | 99.00th=[32375], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:09:49.332 | 99.99th=[34866] 00:09:49.332 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:09:49.332 slat (usec): min=4, max=20124, avg=143.24, stdev=938.05 00:09:49.332 clat (usec): min=7220, max=53274, avg=19128.78, stdev=8045.05 00:09:49.332 lat (usec): min=7231, max=53308, avg=19272.02, stdev=8128.17 00:09:49.332 clat percentiles (usec): 00:09:49.332 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:09:49.332 | 30.00th=[12387], 40.00th=[13698], 50.00th=[15270], 60.00th=[18220], 00:09:49.332 | 70.00th=[24511], 80.00th=[27657], 90.00th=[31589], 95.00th=[34341], 00:09:49.332 | 99.00th=[36963], 99.50th=[42206], 99.90th=[42206], 99.95th=[49021], 00:09:49.332 | 99.99th=[53216] 00:09:49.332 bw ( KiB/s): min=12288, max=16384, per=23.38%, avg=14336.00, stdev=2896.31, samples=2 00:09:49.332 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:49.332 lat (usec) : 750=0.01% 00:09:49.332 lat (msec) : 4=0.18%, 10=2.00%, 20=68.80%, 50=28.99%, 100=0.01% 00:09:49.332 cpu : usr=4.20%, sys=7.59%, ctx=304, majf=0, minf=1 00:09:49.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:49.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.332 issued rwts: total=3561,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.332 job3: (groupid=0, jobs=1): err= 0: pid=2719099: Wed Jul 24 17:52:35 2024 00:09:49.332 read: IOPS=2800, BW=10.9MiB/s (11.5MB/s)(11.4MiB/1043msec) 00:09:49.332 slat (usec): min=2, max=39470, avg=159.77, stdev=1320.25 00:09:49.332 clat (msec): min=6, max=109, avg=21.48, stdev=16.58 00:09:49.332 lat (msec): min=6, max=109, avg=21.64, stdev=16.66 00:09:49.332 clat percentiles (msec): 
00:09:49.332 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:09:49.332 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:09:49.332 | 70.00th=[ 19], 80.00th=[ 25], 90.00th=[ 47], 95.00th=[ 61], 00:09:49.332 | 99.00th=[ 87], 99.50th=[ 88], 99.90th=[ 92], 99.95th=[ 92], 00:09:49.332 | 99.99th=[ 110] 00:09:49.332 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:09:49.332 slat (usec): min=3, max=28949, avg=167.56, stdev=1169.20 00:09:49.332 clat (usec): min=4755, max=86294, avg=22530.59, stdev=16174.40 00:09:49.332 lat (usec): min=4761, max=86301, avg=22698.15, stdev=16288.12 00:09:49.332 clat percentiles (usec): 00:09:49.332 | 1.00th=[ 6194], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[12256], 00:09:49.332 | 30.00th=[13042], 40.00th=[13960], 50.00th=[16712], 60.00th=[19268], 00:09:49.332 | 70.00th=[23725], 80.00th=[31851], 90.00th=[42730], 95.00th=[54789], 00:09:49.332 | 99.00th=[83362], 99.50th=[85459], 99.90th=[86508], 99.95th=[86508], 00:09:49.332 | 99.99th=[86508] 00:09:49.332 bw ( KiB/s): min=12288, max=12288, per=20.04%, avg=12288.00, stdev= 0.00, samples=2 00:09:49.332 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:49.332 lat (msec) : 10=6.06%, 20=61.22%, 50=25.25%, 100=7.46%, 250=0.02% 00:09:49.332 cpu : usr=2.11%, sys=3.65%, ctx=273, majf=0, minf=1 00:09:49.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:49.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.332 issued rwts: total=2921,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.332 00:09:49.332 Run status group 0 (all jobs): 00:09:49.332 READ: bw=56.9MiB/s (59.6MB/s), 10.9MiB/s-18.0MiB/s (11.5MB/s-18.8MB/s), io=59.3MiB (62.2MB), run=1002-1043msec 00:09:49.332 WRITE: bw=59.9MiB/s (62.8MB/s), 11.5MiB/s-19.7MiB/s (12.1MB/s-20.7MB/s), io=62.5MiB (65.5MB), run=1002-1043msec 00:09:49.332 00:09:49.332 Disk stats (read/write): 00:09:49.332 nvme0n1: ios=3609/3727, merge=0/0, ticks=51354/52241, in_queue=103595, util=95.99% 00:09:49.332 nvme0n2: ios=3966/4096, merge=0/0, ticks=37312/35529, in_queue=72841, util=95.43% 00:09:49.332 nvme0n3: ios=2643/3072, merge=0/0, ticks=18054/23439, in_queue=41493, util=88.83% 00:09:49.332 nvme0n4: ios=2313/2560, merge=0/0, ticks=19658/25243, in_queue=44901, util=88.11% 00:09:49.332 17:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:49.332 17:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2719237 00:09:49.332 17:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:49.332 17:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:49.332 [global] 00:09:49.332 thread=1 00:09:49.332 invalidate=1 00:09:49.332 rw=read 00:09:49.332 time_based=1 00:09:49.332 runtime=10 00:09:49.332 ioengine=libaio 00:09:49.332 direct=1 00:09:49.332 bs=4096 00:09:49.332 iodepth=1 00:09:49.332 norandommap=1 00:09:49.332 numjobs=1 00:09:49.332 00:09:49.332 [job0] 00:09:49.332 filename=/dev/nvme0n1 00:09:49.332 [job1] 00:09:49.332 filename=/dev/nvme0n2 00:09:49.332 [job2] 00:09:49.332 filename=/dev/nvme0n3 00:09:49.332 [job3] 00:09:49.332 filename=/dev/nvme0n4 00:09:49.332 Could not set queue depth 
(nvme0n1) 00:09:49.332 Could not set queue depth (nvme0n2) 00:09:49.332 Could not set queue depth (nvme0n3) 00:09:49.332 Could not set queue depth (nvme0n4) 00:09:49.332 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.332 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.332 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.332 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.332 fio-3.35 00:09:49.332 Starting 4 threads 00:09:52.005 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:52.574 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:52.574 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=30146560, buflen=4096 00:09:52.574 fio: pid=2719394, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.574 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.574 17:52:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:52.574 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=26263552, buflen=4096 00:09:52.574 fio: pid=2719381, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.143 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11476992, buflen=4096 00:09:53.143 fio: pid=2719335, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.143 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.143 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:53.143 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=30314496, buflen=4096 00:09:53.143 fio: pid=2719345, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:53.143 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.143 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:53.143 00:09:53.143 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2719335: Wed Jul 24 17:52:39 2024 00:09:53.143 read: IOPS=805, BW=3222KiB/s (3299kB/s)(10.9MiB/3479msec) 00:09:53.143 slat (usec): min=5, max=28326, avg=31.66, stdev=629.29 00:09:53.143 clat (usec): min=273, max=42325, avg=1198.11, stdev=5756.52 00:09:53.143 lat (usec): min=282, max=42338, avg=1227.66, stdev=5789.03 00:09:53.143 clat percentiles (usec): 00:09:53.143 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 306], 00:09:53.143 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 383], 00:09:53.143 | 70.00th=[ 408], 80.00th=[ 445], 90.00th=[ 
490], 95.00th=[ 537], 00:09:53.143 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:53.143 | 99.99th=[42206] 00:09:53.143 bw ( KiB/s): min= 104, max= 7144, per=14.22%, avg=3645.33, stdev=2951.74, samples=6 00:09:53.143 iops : min= 26, max= 1786, avg=911.33, stdev=737.94, samples=6 00:09:53.143 lat (usec) : 500=91.40%, 750=6.39%, 1000=0.14% 00:09:53.143 lat (msec) : 2=0.04%, 50=2.00% 00:09:53.143 cpu : usr=0.81%, sys=1.61%, ctx=2809, majf=0, minf=1 00:09:53.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 issued rwts: total=2803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.143 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2719345: Wed Jul 24 17:52:39 2024 00:09:53.143 read: IOPS=1978, BW=7911KiB/s (8101kB/s)(28.9MiB/3742msec) 00:09:53.143 slat (usec): min=4, max=28077, avg=26.98, stdev=456.69 00:09:53.143 clat (usec): min=242, max=41967, avg=470.86, stdev=2374.29 00:09:53.143 lat (usec): min=248, max=42000, avg=497.84, stdev=2418.16 00:09:53.143 clat percentiles (usec): 00:09:53.143 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 285], 00:09:53.143 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 334], 00:09:53.143 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 396], 95.00th=[ 441], 00:09:53.143 | 99.00th=[ 570], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41681], 00:09:53.143 | 99.99th=[42206] 00:09:53.143 bw ( KiB/s): min= 2704, max=11392, per=30.17%, avg=7731.14, stdev=3523.20, samples=7 00:09:53.143 iops : min= 676, max= 2848, avg=1932.71, stdev=880.73, samples=7 00:09:53.143 lat (usec) : 250=1.09%, 500=96.62%, 750=1.88%, 1000=0.01% 00:09:53.143 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.34% 00:09:53.143 cpu : usr=1.42%, sys=3.42%, ctx=7410, majf=0, minf=1 00:09:53.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 issued rwts: total=7402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.143 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2719381: Wed Jul 24 17:52:39 2024 00:09:53.143 read: IOPS=2000, BW=8002KiB/s (8195kB/s)(25.0MiB/3205msec) 00:09:53.143 slat (usec): min=4, max=15488, avg=23.29, stdev=211.34 00:09:53.143 clat (usec): min=277, max=41485, avg=468.20, stdev=1713.36 00:09:53.143 lat (usec): min=282, max=41498, avg=491.48, stdev=1726.52 00:09:53.143 clat percentiles (usec): 00:09:53.143 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 326], 00:09:53.143 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 375], 60.00th=[ 396], 00:09:53.143 | 70.00th=[ 424], 80.00th=[ 461], 90.00th=[ 519], 95.00th=[ 562], 00:09:53.143 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[41157], 99.95th=[41157], 00:09:53.143 | 99.99th=[41681] 00:09:53.143 bw ( KiB/s): min= 4408, max=11712, per=31.72%, avg=8128.00, stdev=2508.62, samples=6 00:09:53.143 iops : min= 1102, max= 2928, avg=2032.00, stdev=627.15, samples=6 00:09:53.143 lat (usec) : 500=87.32%, 750=12.41%, 1000=0.05% 00:09:53.143 lat (msec) : 
2=0.02%, 50=0.19% 00:09:53.143 cpu : usr=1.94%, sys=4.59%, ctx=6417, majf=0, minf=1 00:09:53.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 issued rwts: total=6413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.143 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2719394: Wed Jul 24 17:52:39 2024 00:09:53.143 read: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(28.8MiB/2925msec) 00:09:53.143 slat (nsec): min=6113, max=58511, avg=13383.48, stdev=5672.70 00:09:53.143 clat (usec): min=307, max=40567, avg=376.83, stdev=469.55 00:09:53.143 lat (usec): min=317, max=40585, avg=390.22, stdev=469.79 00:09:53.143 clat percentiles (usec): 00:09:53.143 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:09:53.143 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:09:53.143 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 400], 95.00th=[ 412], 00:09:53.143 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 791], 99.95th=[ 914], 00:09:53.143 | 99.99th=[40633] 00:09:53.143 bw ( KiB/s): min= 9096, max=10512, per=39.32%, avg=10076.80, stdev=569.14, samples=5 00:09:53.143 iops : min= 2274, max= 2628, avg=2519.20, stdev=142.29, samples=5 00:09:53.143 lat (usec) : 500=99.77%, 750=0.11%, 1000=0.07% 00:09:53.143 lat (msec) : 2=0.03%, 50=0.01% 00:09:53.143 cpu : usr=2.77%, sys=4.65%, ctx=7361, majf=0, minf=1 00:09:53.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.143 issued rwts: total=7361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.143 00:09:53.143 Run status group 0 (all jobs): 00:09:53.143 READ: bw=25.0MiB/s (26.2MB/s), 3222KiB/s-9.83MiB/s (3299kB/s-10.3MB/s), io=93.7MiB (98.2MB), run=2925-3742msec 00:09:53.143 00:09:53.143 Disk stats (read/write): 00:09:53.143 nvme0n1: ios=2842/0, merge=0/0, ticks=3387/0, in_queue=3387, util=98.63% 00:09:53.143 nvme0n2: ios=7016/0, merge=0/0, ticks=3330/0, in_queue=3330, util=94.24% 00:09:53.143 nvme0n3: ios=6245/0, merge=0/0, ticks=4002/0, in_queue=4002, util=98.97% 00:09:53.143 nvme0n4: ios=7218/0, merge=0/0, ticks=2663/0, in_queue=2663, util=96.74% 00:09:53.401 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.401 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:53.659 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.659 17:52:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:53.917 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.917 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:54.175 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.175 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:54.433 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:54.433 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2719237 00:09:54.433 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:54.433 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:54.691 nvmf hotplug test: fio failed as expected 00:09:54.691 17:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:09:54.950 rmmod nvme_tcp 00:09:54.950 rmmod nvme_fabrics 00:09:54.950 rmmod nvme_keyring 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2717193 ']' 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2717193 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2717193 ']' 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2717193 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2717193 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2717193' 00:09:54.950 killing process with pid 2717193 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2717193 00:09:54.950 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2717193 00:09:55.207 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.208 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:55.208 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:55.208 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:55.208 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:55.208 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.208 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.208 17:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.745 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:57.745 00:09:57.745 real 0m24.483s 00:09:57.745 user 1m22.404s 00:09:57.746 sys 0m8.546s 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.746 ************************************ 00:09:57.746 END TEST nvmf_fio_target 00:09:57.746 ************************************ 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test 
nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.746 ************************************ 00:09:57.746 START TEST nvmf_bdevio 00:09:57.746 ************************************ 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:57.746 * Looking for test storage... 00:09:57.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.746 17:52:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:57.746 17:52:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:59.648 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.648 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:59.649 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:59.649 Found net devices under 0000:09:00.0: cvl_0_0 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:59.649 Found net devices under 0000:09:00.1: cvl_0_1 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:09:59.649 00:09:59.649 --- 10.0.0.2 ping statistics --- 00:09:59.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.649 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:09:59.649 00:09:59.649 --- 10.0.0.1 ping statistics --- 00:09:59.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.649 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2722040 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2722040 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2722040 ']' 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.649 17:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.649 [2024-07-24 17:52:45.792235] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:09:59.649 [2024-07-24 17:52:45.792311] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.649 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.649 [2024-07-24 17:52:45.859247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.908 [2024-07-24 17:52:45.982835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.908 [2024-07-24 17:52:45.982897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.908 [2024-07-24 17:52:45.982913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.908 [2024-07-24 17:52:45.982926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.908 [2024-07-24 17:52:45.982938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.908 [2024-07-24 17:52:45.983021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:59.908 [2024-07-24 17:52:45.983073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:59.908 [2024-07-24 17:52:45.983135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:59.908 [2024-07-24 17:52:45.983140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.908 [2024-07-24 17:52:46.144589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.908 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.168 Malloc0 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.168 [2024-07-24 17:52:46.199194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:00.168 { 00:10:00.168 "params": { 00:10:00.168 "name": "Nvme$subsystem", 00:10:00.168 "trtype": "$TEST_TRANSPORT", 00:10:00.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.168 "adrfam": "ipv4", 00:10:00.168 "trsvcid": "$NVMF_PORT", 00:10:00.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.168 "hdgst": ${hdgst:-false}, 00:10:00.168 "ddgst": ${ddgst:-false} 00:10:00.168 }, 00:10:00.168 "method": "bdev_nvme_attach_controller" 00:10:00.168 } 00:10:00.168 EOF 00:10:00.168 )") 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:00.168 17:52:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:00.168 "params": { 00:10:00.168 "name": "Nvme1", 00:10:00.168 "trtype": "tcp", 00:10:00.168 "traddr": "10.0.0.2", 00:10:00.168 "adrfam": "ipv4", 00:10:00.168 "trsvcid": "4420", 00:10:00.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.168 "hdgst": false, 00:10:00.168 "ddgst": false 00:10:00.168 }, 00:10:00.168 "method": "bdev_nvme_attach_controller" 00:10:00.168 }' 00:10:00.168 [2024-07-24 17:52:46.246830] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:10:00.168 [2024-07-24 17:52:46.246908] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722104 ] 00:10:00.168 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.168 [2024-07-24 17:52:46.306663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.168 [2024-07-24 17:52:46.419186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.168 [2024-07-24 17:52:46.419238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.168 [2024-07-24 17:52:46.419241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.426 I/O targets: 00:10:00.426 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:00.426 00:10:00.426 00:10:00.426 CUnit - A unit testing framework for C - Version 2.1-3 00:10:00.426 http://cunit.sourceforge.net/ 00:10:00.426 00:10:00.426 00:10:00.426 Suite: bdevio tests on: Nvme1n1 00:10:00.426 Test: blockdev write read block ...passed 00:10:00.684 Test: blockdev write zeroes read block ...passed 00:10:00.684 Test: blockdev write zeroes read no split ...passed 00:10:00.684 Test: blockdev write zeroes read split ...passed 00:10:00.684 Test: blockdev write zeroes read split partial ...passed 00:10:00.684 Test: blockdev reset ...[2024-07-24 17:52:46.809951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:00.684 [2024-07-24 17:52:46.810054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd3580 (9): Bad file descriptor 00:10:00.684 [2024-07-24 17:52:46.911394] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:00.684 passed 00:10:00.684 Test: blockdev write read 8 blocks ...passed 00:10:00.684 Test: blockdev write read size > 128k ...passed 00:10:00.684 Test: blockdev write read invalid size ...passed 00:10:00.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:00.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:00.944 Test: blockdev write read max offset ...passed 00:10:00.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:00.944 Test: blockdev writev readv 8 blocks ...passed 00:10:00.944 Test: blockdev writev readv 30 x 1block ...passed 00:10:00.944 Test: blockdev writev readv block ...passed 00:10:00.944 Test: blockdev writev readv size > 128k ...passed 00:10:00.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:00.944 Test: blockdev comparev and writev ...[2024-07-24 17:52:47.124284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.944 [2024-07-24 17:52:47.124321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.124345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.945 [2024-07-24 17:52:47.124362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.124719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.945 [2024-07-24 17:52:47.124744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.124765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.945 [2024-07-24 17:52:47.124781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.125117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.945 [2024-07-24 17:52:47.125141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.125163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.945 [2024-07-24 17:52:47.125178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.125509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.945 [2024-07-24 17:52:47.125534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.125555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:00.945 [2024-07-24 17:52:47.125572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:00.945 passed 00:10:00.945 Test: blockdev nvme passthru rw ...passed 00:10:00.945 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:52:47.207399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.945 [2024-07-24 17:52:47.207427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.207606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.945 [2024-07-24 17:52:47.207628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.207800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.945 [2024-07-24 17:52:47.207822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:00.945 [2024-07-24 17:52:47.208001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:00.945 [2024-07-24 17:52:47.208025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:00.945 passed 00:10:01.205 Test: blockdev nvme admin passthru ...passed 00:10:01.205 Test: blockdev copy ...passed 00:10:01.205 00:10:01.205 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.205 suites 1 1 n/a 0 0 00:10:01.205 tests 23 23 23 0 0 00:10:01.205 asserts 152 152 152 0 n/a 00:10:01.205 00:10:01.205 Elapsed time = 1.235 seconds 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.465 rmmod nvme_tcp 00:10:01.465 rmmod nvme_fabrics 00:10:01.465 rmmod nvme_keyring 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2722040 ']' 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2722040 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2722040 ']' 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2722040 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2722040 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2722040' 00:10:01.465 killing process with pid 2722040 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2722040 00:10:01.465 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2722040 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.724 17:52:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.266 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:04.266 00:10:04.266 real 0m6.410s 00:10:04.266 user 0m10.600s 00:10:04.266 sys 0m2.055s 00:10:04.266 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.266 17:52:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.266 ************************************ 00:10:04.266 END TEST nvmf_bdevio 00:10:04.266 ************************************ 00:10:04.266 17:52:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:04.266 00:10:04.266 real 3m54.888s 00:10:04.266 user 9m56.202s 00:10:04.266 sys 1m13.803s 00:10:04.266 17:52:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.266 17:52:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.266 ************************************ 00:10:04.266 END TEST nvmf_target_core 00:10:04.266 ************************************ 00:10:04.266 17:52:50 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.266 17:52:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.266 17:52:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.266 17:52:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.266 ************************************ 00:10:04.266 START TEST nvmf_target_extra 00:10:04.266 ************************************ 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.266 * Looking for test storage... 00:10:04.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.266 ************************************ 00:10:04.266 START TEST nvmf_example 00:10:04.266 ************************************ 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.266 * Looking for test storage... 00:10:04.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.266 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.267 17:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.267 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:06.170 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:06.170 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.170 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:06.170 Found net devices under 0000:09:00.0: cvl_0_0 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.171 17:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:10:06.171 Found net devices under 0000:09:00.1: cvl_0_1
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:06.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:06.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms
00:10:06.171
00:10:06.171 --- 10.0.0.2 ping statistics ---
00:10:06.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.171 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:06.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:06.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms
00:10:06.171
00:10:06.171 --- 10.0.0.1 ping statistics ---
00:10:06.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.171 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2724225
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2724225
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2724225 ']'
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100
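The block above is the heart of nvmftestinit for a physical (phy) TCP run: one port of the dual-port E810 NIC is moved into a private network namespace to play the target, its sibling port stays in the root namespace as the initiator, and a single ping in each direction proves the link before any NVMe traffic flows. The same commands, gathered from the trace for reference (run as root; interface names are the cvl_* devices discovered earlier):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Because the target lives inside the namespace, every target-side command from here on, including the example app launched above, is prefixed with ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD.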
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:06.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:06.171 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:06.171 EAL: No free 2048 kB hugepages reported on node 1
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
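Those five rpc_cmd calls are the entire provisioning sequence for this test: create the TCP transport, carve a 64 MiB RAM-backed bdev with 512-byte blocks, create subsystem cnode1 (serial SPDK00000000000001, -a to allow any host), attach the bdev as a namespace, and listen on 10.0.0.2:4420. rpc_cmd is effectively a thin wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket seen in waitforlisten; done by hand against a running target it would look roughly like this (a sketch, flags exactly as traced):

    RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192    # -u: I/O unit size in bytes
    $RPC bdev_malloc_create 64 512                  # 64 MiB, 512 B blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The interleaved [[ 0 == 0 ]] entries appear to be rpc_cmd verifying each call's exit status before the script proceeds.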
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:07.108 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:07.108 EAL: No free 2048 kB hugepages reported on node 1
00:10:19.321 Initializing NVMe Controllers
00:10:19.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:19.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:19.321 Initialization complete. Launching workers.
00:10:19.321 ========================================================
00:10:19.321 Latency(us)
00:10:19.321 Device Information : IOPS MiB/s Average min max
00:10:19.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13708.60 53.55 4669.82 917.80 16248.07
00:10:19.321 ========================================================
00:10:19.321 Total : 13708.60 53.55 4669.82 917.80 16248.07
00:10:19.321
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:19.321 rmmod nvme_tcp
00:10:19.321 rmmod nvme_fabrics
00:10:19.321 rmmod nvme_keyring
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2724225 ']'
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2724225
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2724225 ']'
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2724225
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
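Two sanity checks on the result table above: 13708.60 IOPS of 4096-byte I/O is 13708.60 x 4096 / 2^20 = 53.55 MiB/s, matching the MiB/s column, and by Little's law 13708.60 IOPS x 4669.82 us of average latency is roughly 64 commands in flight, exactly the requested -q 64 queue depth. Teardown then follows the nvmfcleanup pattern traced above, condensed here as a sketch (the retry loop and back-off are simplified; the trace succeeds on the first pass):

    sync
    set +e                                # module removal may transiently fail
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # -v prints the rmmod lines seen above
        sleep 1                           # assumed back-off, not shown in the trace
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the example target (pid 2724225) and reap it

The rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -v reporting each module it actually removed, dependencies included.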
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2724225
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2724225'
00:10:19.321 killing process with pid 2724225
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 2724225
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 2724225
00:10:19.321 nvmf threads initialize successfully
00:10:19.321 bdev subsystem init successfully
00:10:19.321 created a nvmf target service
00:10:19.321 create targets's poll groups done
00:10:19.321 all subsystems of target started
00:10:19.321 nvmf target is running
00:10:19.321 all subsystems of target stopped
00:10:19.321 destroy targets's poll groups done
00:10:19.321 destroyed the nvmf target service
00:10:19.321 bdev subsystem finish successfully
00:10:19.321 nvmf threads destroy successfully
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:19.321 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:19.892 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:19.892 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:19.892 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:19.892 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.892
00:10:19.892 real 0m15.869s
00:10:19.892 user 0m41.511s
00:10:19.892 sys 0m4.714s
00:10:19.892 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:19.892 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.892 ************************************
00:10:19.892 END TEST nvmf_example
00:10:19.892 ************************************
00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:19.892 17:53:06
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.892 ************************************ 00:10:19.892 START TEST nvmf_filesystem 00:10:19.892 ************************************ 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:19.892 * Looking for test storage... 00:10:19.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:19.892 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:19.892 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:19.892 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:19.893 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:19.893 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:19.893 #define SPDK_CONFIG_H 00:10:19.893 #define SPDK_CONFIG_APPS 1 00:10:19.893 #define SPDK_CONFIG_ARCH native 00:10:19.893 #undef SPDK_CONFIG_ASAN 00:10:19.893 #undef SPDK_CONFIG_AVAHI 00:10:19.893 #undef SPDK_CONFIG_CET 00:10:19.893 #define SPDK_CONFIG_COVERAGE 1 00:10:19.893 #define SPDK_CONFIG_CROSS_PREFIX 00:10:19.893 #undef SPDK_CONFIG_CRYPTO 00:10:19.893 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:19.893 #undef SPDK_CONFIG_CUSTOMOCF 00:10:19.893 #undef SPDK_CONFIG_DAOS 00:10:19.893 #define SPDK_CONFIG_DAOS_DIR 00:10:19.893 #define SPDK_CONFIG_DEBUG 1 00:10:19.893 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:19.893 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:19.893 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:19.893 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:19.893 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:19.893 #undef SPDK_CONFIG_DPDK_UADK 00:10:19.893 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:19.893 #define SPDK_CONFIG_EXAMPLES 1 00:10:19.893 #undef SPDK_CONFIG_FC 00:10:19.893 #define SPDK_CONFIG_FC_PATH 00:10:19.893 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:19.893 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:19.893 #undef SPDK_CONFIG_FUSE 00:10:19.893 #undef SPDK_CONFIG_FUZZER 00:10:19.893 #define SPDK_CONFIG_FUZZER_LIB 00:10:19.893 #undef SPDK_CONFIG_GOLANG 00:10:19.893 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:19.893 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:19.893 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:19.893 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:19.893 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:19.893 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:19.893 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:19.893 #define SPDK_CONFIG_IDXD 1 00:10:19.893 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:19.893 #undef SPDK_CONFIG_IPSEC_MB 00:10:19.893 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:19.893 #define SPDK_CONFIG_ISAL 1 00:10:19.893 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:19.893 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:19.893 #define SPDK_CONFIG_LIBDIR 00:10:19.893 #undef SPDK_CONFIG_LTO 00:10:19.893 #define SPDK_CONFIG_MAX_LCORES 128 00:10:19.893 #define SPDK_CONFIG_NVME_CUSE 1 00:10:19.893 #undef SPDK_CONFIG_OCF 00:10:19.893 #define SPDK_CONFIG_OCF_PATH 00:10:19.893 #define SPDK_CONFIG_OPENSSL_PATH 00:10:19.893 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:19.893 #define SPDK_CONFIG_PGO_DIR 00:10:19.893 #undef SPDK_CONFIG_PGO_USE 00:10:19.893 #define SPDK_CONFIG_PREFIX /usr/local 00:10:19.893 #undef SPDK_CONFIG_RAID5F 00:10:19.893 #undef SPDK_CONFIG_RBD 00:10:19.893 #define SPDK_CONFIG_RDMA 1 00:10:19.893 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:19.893 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:19.893 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:19.893 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:19.893 #define SPDK_CONFIG_SHARED 1 00:10:19.893 #undef SPDK_CONFIG_SMA 00:10:19.893 #define SPDK_CONFIG_TESTS 1 00:10:19.893 #undef SPDK_CONFIG_TSAN 00:10:19.893 #define SPDK_CONFIG_UBLK 1 00:10:19.893 #define SPDK_CONFIG_UBSAN 1 00:10:19.893 #undef SPDK_CONFIG_UNIT_TESTS 00:10:19.893 #undef SPDK_CONFIG_URING 00:10:19.893 #define SPDK_CONFIG_URING_PATH 00:10:19.893 #undef SPDK_CONFIG_URING_ZNS 00:10:19.893 #undef SPDK_CONFIG_USDT 00:10:19.893 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:19.893 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:19.893 #define SPDK_CONFIG_VFIO_USER 1 00:10:19.893 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:10:19.893 #define SPDK_CONFIG_VHOST 1 00:10:19.893 #define SPDK_CONFIG_VIRTIO 1 00:10:19.893 #undef SPDK_CONFIG_VTUNE 00:10:19.893 #define SPDK_CONFIG_VTUNE_DIR 00:10:19.893 #define SPDK_CONFIG_WERROR 1 00:10:19.893 #define SPDK_CONFIG_WPDK_DIR 00:10:19.893 #undef SPDK_CONFIG_XNVME 00:10:19.893 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:19.893 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:19.894 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:19.894 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:19.894 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:19.895 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:19.895 
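[editor's note] The long run of `: 0` / `: 1` / `: tcp` lines above is autotest_common.sh assigning defaults to the SPDK_TEST_* switches before exporting them; values such as SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp and SPDK_TEST_NVMF_NICS=e810 come from the autorun-spdk.conf sourced earlier in the job, and everything else falls back to 0. The underlying shell idiom, sketched with a hypothetical flag name:

# ":" is a no-op command, so this assigns the default only when unset.
: "${SPDK_TEST_EXAMPLE:=0}"     # hypothetical flag, for illustration only
export SPDK_TEST_EXAMPLE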
17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:19.895 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:19.896 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:19.896 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:19.896 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:19.896 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:19.896 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:19.896 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:10:19.896 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:20.157 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:20.157 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:20.157 17:53:06 
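[editor's note] The sanitizer setup in the trace above is plain environment wiring: ASAN_OPTIONS and UBSAN_OPTIONS are colon-separated key=value strings, and the leak suppression file is just a text file of "leak:<pattern>" rules (here leak:libfuse3.so) that LSan is pointed at via LSAN_OPTIONS. The same pattern in isolation:

# Colon-separated runtime options for ASan/UBSan.
export ASAN_OPTIONS=abort_on_error=1:disable_coredump=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1
# LSan suppressions: one "leak:<pattern>" rule per line.
supp=/var/tmp/asan_suppression_file
echo 'leak:libfuse3.so' > "$supp"
export LSAN_OPTIONS=suppressions=$supp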
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:10:20.157 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2725953 ]] 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2725953 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.VO8sVH 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VO8sVH/tests/target /tmp/spdk.VO8sVH 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:10:20.158 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=51262488576 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10732220416 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30986096640 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376530944 00:10:20.158 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22413312 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996291584 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1064960 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:10:20.158 * Looking for test storage... 
00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:10:20.158 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=51262488576 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12946812928 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
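[editor's note] The storage numbers above check out. set_test_storage was asked for 2 GiB (2147483648), which became requested_size=2214592512 after a 64 MiB margin. The overlay root has target_space=51262488576 bytes available, passing the >= test; new_size=12946812928 is the mount's current usage (10732220416) plus the request, and 12946812928 * 100 / 61994708992 is about 20, well under the 95% fill threshold that would have forced the fallback directory /tmp/spdk.VO8sVH. The check in isolation, using the values from this run:

requested_size=2214592512       # 2 GiB request + 64 MiB margin
used=10732220416                # bytes already used on the overlay mount
size=61994708992                # total size of the mount
new_size=$((used + requested_size))           # 12946812928
(( new_size * 100 / size > 95 )) && echo "too full, use fallback dir"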
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.159 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.160 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:22.103 
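[editor's note] nvmf/common.sh assembles the target command line incrementally: build_nvmf_app_args appends the shared-memory id and a full tracepoint mask, then any NO_HUGE override (empty in this run), and nvmftestinit later prefixes the whole array with the namespace wrapper before launch. Roughly, with the values seen in this trace — the initial binary path is inferred from the final command line rather than shown directly:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)  # inferred
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)       # shm id 0, all tracepoint groups
NVMF_APP+=("${NO_HUGE[@]}")                       # empty here
# After nvmf_tcp_init, the app runs inside the target namespace:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0xF &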
17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:22.103 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:22.104 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:22.104 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:22.104 Found net devices under 0000:09:00.0: cvl_0_0 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:22.104 Found net devices under 0000:09:00.1: cvl_0_1 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:22.104 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:22.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:10:22.363 00:10:22.363 --- 10.0.0.2 ping statistics --- 00:10:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.363 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:10:22.363 00:10:22.363 --- 10.0.0.1 ping statistics --- 00:10:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.363 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.363 ************************************ 00:10:22.363 START TEST nvmf_filesystem_no_in_capsule 00:10:22.363 ************************************ 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.363 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2727669 00:10:22.364 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.364 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2727669 00:10:22.364 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2727669 ']' 00:10:22.364 
17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.364 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.364 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.364 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.364 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.364 [2024-07-24 17:53:08.508486] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:10:22.364 [2024-07-24 17:53:08.508563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.364 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.364 [2024-07-24 17:53:08.573127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.622 [2024-07-24 17:53:08.686421] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.622 [2024-07-24 17:53:08.686475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.622 [2024-07-24 17:53:08.686489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.622 [2024-07-24 17:53:08.686500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.622 [2024-07-24 17:53:08.686510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
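Before the target app was launched, nvmf_tcp_init (traced further up, nvmf/common.sh lines 229-268) split the two E810 ports between a private network namespace for the target and the root namespace for the initiator, and the two pings confirmed the link in both directions. Collected in order, the commands were:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                  # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves in
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listen port
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator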
00:10:22.622 [2024-07-24 17:53:08.686599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.622 [2024-07-24 17:53:08.686663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.622 [2024-07-24 17:53:08.686728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.622 [2024-07-24 17:53:08.686731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.622 [2024-07-24 17:53:08.843335] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.622 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.881 Malloc1 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.881 17:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.881 [2024-07-24 17:53:09.030417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:10:22.881 { 00:10:22.881 "name": "Malloc1", 00:10:22.881 "aliases": [ 00:10:22.881 "997e8260-4e9a-46c0-97b2-9914d49688f5" 00:10:22.881 ], 00:10:22.881 "product_name": "Malloc disk", 00:10:22.881 "block_size": 512, 00:10:22.881 "num_blocks": 1048576, 00:10:22.881 "uuid": "997e8260-4e9a-46c0-97b2-9914d49688f5", 00:10:22.881 "assigned_rate_limits": { 00:10:22.881 "rw_ios_per_sec": 0, 00:10:22.881 "rw_mbytes_per_sec": 0, 00:10:22.881 "r_mbytes_per_sec": 0, 00:10:22.881 "w_mbytes_per_sec": 0 00:10:22.881 }, 00:10:22.881 "claimed": true, 00:10:22.881 "claim_type": "exclusive_write", 00:10:22.881 "zoned": false, 00:10:22.881 "supported_io_types": { 00:10:22.881 "read": 
true, 00:10:22.881 "write": true, 00:10:22.881 "unmap": true, 00:10:22.881 "flush": true, 00:10:22.881 "reset": true, 00:10:22.881 "nvme_admin": false, 00:10:22.881 "nvme_io": false, 00:10:22.881 "nvme_io_md": false, 00:10:22.881 "write_zeroes": true, 00:10:22.881 "zcopy": true, 00:10:22.881 "get_zone_info": false, 00:10:22.881 "zone_management": false, 00:10:22.881 "zone_append": false, 00:10:22.881 "compare": false, 00:10:22.881 "compare_and_write": false, 00:10:22.881 "abort": true, 00:10:22.881 "seek_hole": false, 00:10:22.881 "seek_data": false, 00:10:22.881 "copy": true, 00:10:22.881 "nvme_iov_md": false 00:10:22.881 }, 00:10:22.881 "memory_domains": [ 00:10:22.881 { 00:10:22.881 "dma_device_id": "system", 00:10:22.881 "dma_device_type": 1 00:10:22.881 }, 00:10:22.881 { 00:10:22.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.881 "dma_device_type": 2 00:10:22.881 } 00:10:22.881 ], 00:10:22.881 "driver_specific": {} 00:10:22.881 } 00:10:22.881 ]' 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:22.881 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.814 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.814 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:10:23.814 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.814 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:23.814 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:25.711 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:26.643 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.575 ************************************ 00:10:27.575 START TEST filesystem_ext4 00:10:27.575 ************************************ 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
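The filesystem_ext4 test starting here (and its btrfs and xfs siblings below) exercises the exported namespace the same way each time: format the GPT partition created by parted above, mount it, round-trip a file, unmount, then verify both that the nvmf target (pid 2727669) is still alive and that the device is still visible over NVMe/TCP. Stripped of the xtrace plumbing and with the mkfs flag inlined from the make_filesystem helper, the cycle is roughly:

  mkfs.ext4 -F /dev/nvme0n1p1            # btrfs/xfs passes use mkfs.btrfs -f / mkfs.xfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 2727669                                # signal 0: nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1          # controller still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1        # partition still present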
00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:27.575 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:27.575 mke2fs 1.46.5 (30-Dec-2021) 00:10:27.575 Discarding device blocks: 0/522240 done 00:10:27.833 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:27.833 Filesystem UUID: 66759d36-bb5d-411b-87c0-2699ba1ce5a3 00:10:27.833 Superblock backups stored on blocks: 00:10:27.833 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:27.833 00:10:27.833 Allocating group tables: 0/64 done 00:10:27.833 Writing inode tables: 0/64 done 00:10:28.397 Creating journal (8192 blocks): done 00:10:29.220 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:10:29.220 00:10:29.220 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:29.220 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.152 
17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2727669 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.152 00:10:30.152 real 0m2.470s 00:10:30.152 user 0m0.008s 00:10:30.152 sys 0m0.060s 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:30.152 ************************************ 00:10:30.152 END TEST filesystem_ext4 00:10:30.152 ************************************ 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.152 ************************************ 00:10:30.152 START TEST filesystem_btrfs 00:10:30.152 ************************************ 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:30.152 17:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:30.152 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:30.410 btrfs-progs v6.6.2 00:10:30.410 See https://btrfs.readthedocs.io for more information. 00:10:30.410 00:10:30.410 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:30.410 NOTE: several default settings have changed in version 5.15, please make sure 00:10:30.410 this does not affect your deployments: 00:10:30.410 - DUP for metadata (-m dup) 00:10:30.410 - enabled no-holes (-O no-holes) 00:10:30.410 - enabled free-space-tree (-R free-space-tree) 00:10:30.410 00:10:30.410 Label: (null) 00:10:30.410 UUID: 1365b2d4-bd13-4957-a908-91c38593d3fa 00:10:30.410 Node size: 16384 00:10:30.410 Sector size: 4096 00:10:30.410 Filesystem size: 510.00MiB 00:10:30.410 Block group profiles: 00:10:30.410 Data: single 8.00MiB 00:10:30.410 Metadata: DUP 32.00MiB 00:10:30.410 System: DUP 8.00MiB 00:10:30.410 SSD detected: yes 00:10:30.410 Zoned device: no 00:10:30.410 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:30.410 Runtime features: free-space-tree 00:10:30.410 Checksum: crc32c 00:10:30.410 Number of devices: 1 00:10:30.410 Devices: 00:10:30.410 ID SIZE PATH 00:10:30.410 1 510.00MiB /dev/nvme0n1p1 00:10:30.410 00:10:30.410 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:30.410 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2727669 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:31.340 00:10:31.340 real 0m1.292s 00:10:31.340 user 0m0.022s 00:10:31.340 sys 0m0.111s 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:31.340 ************************************ 00:10:31.340 END TEST filesystem_btrfs 00:10:31.340 ************************************ 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.340 ************************************ 00:10:31.340 START TEST filesystem_xfs 00:10:31.340 ************************************ 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:31.340 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:31.341 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:31.599 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:31.599 = sectsz=512 attr=2, projid32bit=1 00:10:31.599 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:31.599 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:10:31.599 data = bsize=4096 blocks=130560, imaxpct=25 00:10:31.599 = sunit=0 swidth=0 blks 00:10:31.599 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:31.599 log =internal log bsize=4096 blocks=16384, version=2 00:10:31.599 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:31.599 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:32.164 Discarding blocks...Done. 00:10:32.164 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:32.164 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2727669 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.060 00:10:34.060 real 0m2.665s 00:10:34.060 user 0m0.011s 00:10:34.060 sys 0m0.063s 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.060 ************************************ 00:10:34.060 END TEST filesystem_xfs 00:10:34.060 ************************************ 00:10:34.060 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:34.317 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:34.317 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2727669 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2727669 ']' 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2727669 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2727669 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2727669' 00:10:34.575 killing process with pid 2727669 00:10:34.575 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2727669 00:10:34.575 17:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2727669 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:35.142 00:10:35.142 real 0m12.720s 00:10:35.142 user 0m48.628s 00:10:35.142 sys 0m1.901s 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.142 ************************************ 00:10:35.142 END TEST nvmf_filesystem_no_in_capsule 00:10:35.142 ************************************ 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:35.142 ************************************ 00:10:35.142 START TEST nvmf_filesystem_in_capsule 00:10:35.142 ************************************ 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2729356 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2729356 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2729356 ']' 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:35.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.142 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.142 [2024-07-24 17:53:21.275776] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:10:35.142 [2024-07-24 17:53:21.275862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.142 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.142 [2024-07-24 17:53:21.352380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.400 [2024-07-24 17:53:21.479029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.400 [2024-07-24 17:53:21.479089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.400 [2024-07-24 17:53:21.479113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.400 [2024-07-24 17:53:21.479128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.400 [2024-07-24 17:53:21.479140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.400 [2024-07-24 17:53:21.479204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.400 [2024-07-24 17:53:21.479258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.400 [2024-07-24 17:53:21.479282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.400 [2024-07-24 17:53:21.479288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
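This second pass (nvmf_filesystem_in_capsule) repeats the whole sequence with one functional change: nvmf_create_transport is called with -c 4096 instead of -c 0, so up to 4 KiB of write data can travel inside the command capsule rather than being fetched in a separate transfer. The rpc_cmd wrapper in the trace is assumed here to stand for SPDK's scripts/rpc.py; under that assumption, the bring-up that follows amounts to:

  # Target app runs inside the namespace (exact command from the trace):
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # first pass used -c 0
  rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side, from the root namespace:
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 \
      -a 10.0.0.2 -s 4420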
00:10:35.400 [2024-07-24 17:53:21.622581] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.400 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 Malloc1 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 [2024-07-24 17:53:21.798293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:10:35.658 17:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:10:35.658 { 00:10:35.658 "name": "Malloc1", 00:10:35.658 "aliases": [ 00:10:35.658 "79bbee58-a13d-47bd-93ed-53aadc28defb" 00:10:35.658 ], 00:10:35.658 "product_name": "Malloc disk", 00:10:35.658 "block_size": 512, 00:10:35.658 "num_blocks": 1048576, 00:10:35.658 "uuid": "79bbee58-a13d-47bd-93ed-53aadc28defb", 00:10:35.658 "assigned_rate_limits": { 00:10:35.658 "rw_ios_per_sec": 0, 00:10:35.658 "rw_mbytes_per_sec": 0, 00:10:35.658 "r_mbytes_per_sec": 0, 00:10:35.658 "w_mbytes_per_sec": 0 00:10:35.658 }, 00:10:35.658 "claimed": true, 00:10:35.658 "claim_type": "exclusive_write", 00:10:35.658 "zoned": false, 00:10:35.658 "supported_io_types": { 00:10:35.658 "read": true, 00:10:35.658 "write": true, 00:10:35.658 "unmap": true, 00:10:35.658 "flush": true, 00:10:35.658 "reset": true, 00:10:35.658 "nvme_admin": false, 00:10:35.658 "nvme_io": false, 00:10:35.658 "nvme_io_md": false, 00:10:35.658 "write_zeroes": true, 00:10:35.658 "zcopy": true, 00:10:35.658 "get_zone_info": false, 00:10:35.658 "zone_management": false, 00:10:35.658 "zone_append": false, 00:10:35.658 "compare": false, 00:10:35.658 "compare_and_write": false, 00:10:35.658 "abort": true, 00:10:35.658 "seek_hole": false, 00:10:35.658 "seek_data": false, 00:10:35.658 "copy": true, 00:10:35.658 "nvme_iov_md": false 00:10:35.658 }, 00:10:35.658 "memory_domains": [ 00:10:35.658 { 00:10:35.658 "dma_device_id": "system", 00:10:35.658 "dma_device_type": 1 00:10:35.658 }, 00:10:35.658 { 00:10:35.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.658 "dma_device_type": 2 00:10:35.658 } 00:10:35.658 ], 00:10:35.658 "driver_specific": {} 00:10:35.658 } 00:10:35.658 ]' 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:10:35.658 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:35.658 17:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.591 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.591 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:10:36.591 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.591 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:36.591 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:38.489 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:38.746 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:39.004 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.937 ************************************ 00:10:39.937 START TEST filesystem_in_capsule_ext4 00:10:39.937 ************************************ 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:39.937 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:39.937 mke2fs 1.46.5 (30-Dec-2021) 00:10:39.937 Discarding device blocks: 0/522240 done 00:10:40.194 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:40.194 Filesystem UUID: e33fbfd6-28ed-4417-ab91-0065466a687c 00:10:40.194 Superblock backups stored on blocks: 00:10:40.194 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:10:40.194 00:10:40.194 Allocating group tables: 0/64 done 00:10:40.194 Writing inode tables: 0/64 done 00:10:40.758 Creating journal (8192 blocks): done 00:10:40.758 Writing superblocks and filesystem accounting information: 0/64 done 00:10:40.758 00:10:40.758 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:40.758 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2729356 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.733 00:10:41.733 real 0m1.662s 00:10:41.733 user 0m0.026s 00:10:41.733 sys 0m0.051s 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:41.733 ************************************ 00:10:41.733 END TEST filesystem_in_capsule_ext4 00:10:41.733 ************************************ 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.733 17:53:27 
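The ext4 case just completed follows the cycle every nvmf_filesystem_create call in this suite uses: partition the remote namespace, build a filesystem on the first partition, and prove it accepts writes before unmounting. Condensed from the trace, with the device name nvme0n1 and mount point /mnt/device taken from this run:

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition spanning the disk
    partprobe && sleep 1                                          # let the kernel re-read the table
    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync                                 # the filesystem accepts writes
    rm /mnt/device/aaa && sync
    umount /mnt/device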
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.733 ************************************ 00:10:41.733 START TEST filesystem_in_capsule_btrfs 00:10:41.733 ************************************ 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:41.733 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:41.991 btrfs-progs v6.6.2 00:10:41.991 See https://btrfs.readthedocs.io for more information. 00:10:41.991 00:10:41.991 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:41.991 NOTE: several default settings have changed in version 5.15, please make sure 00:10:41.991 this does not affect your deployments: 00:10:41.991 - DUP for metadata (-m dup) 00:10:41.991 - enabled no-holes (-O no-holes) 00:10:41.991 - enabled free-space-tree (-R free-space-tree) 00:10:41.991 00:10:41.992 Label: (null) 00:10:41.992 UUID: 7b4573c5-ecd7-4c9c-be21-c263921469ea 00:10:41.992 Node size: 16384 00:10:41.992 Sector size: 4096 00:10:41.992 Filesystem size: 510.00MiB 00:10:41.992 Block group profiles: 00:10:41.992 Data: single 8.00MiB 00:10:41.992 Metadata: DUP 32.00MiB 00:10:41.992 System: DUP 8.00MiB 00:10:41.992 SSD detected: yes 00:10:41.992 Zoned device: no 00:10:41.992 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:41.992 Runtime features: free-space-tree 00:10:41.992 Checksum: crc32c 00:10:41.992 Number of devices: 1 00:10:41.992 Devices: 00:10:41.992 ID SIZE PATH 00:10:41.992 1 510.00MiB /dev/nvme0n1p1 00:10:41.992 00:10:41.992 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:41.992 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2729356 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.557 00:10:42.557 real 0m1.034s 00:10:42.557 user 0m0.027s 00:10:42.557 sys 0m0.112s 00:10:42.557 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.557 17:53:28 
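Note the small branch visible at the top of the btrfs trace: make_filesystem picks ext4's uppercase -F force flag only when the fstype is ext4, and falls back to the lowercase -f that mkfs.btrfs and mkfs.xfs expect. A sketch of that flag selection, reduced to the part these traces exercise (the traced helper also carries a retry counter, omitted here):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 forces with uppercase -F
        else
            force=-f        # mkfs.btrfs and mkfs.xfs force with lowercase -f
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }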
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:42.557 ************************************ 00:10:42.557 END TEST filesystem_in_capsule_btrfs 00:10:42.557 ************************************ 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.815 ************************************ 00:10:42.815 START TEST filesystem_in_capsule_xfs 00:10:42.815 ************************************ 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:42.815 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:42.815 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:42.815 = sectsz=512 attr=2, projid32bit=1 00:10:42.815 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:42.815 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:42.815 data = bsize=4096 blocks=130560, imaxpct=25 00:10:42.815 = sunit=0 swidth=0 blks 00:10:42.815 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:42.815 log =internal log bsize=4096 blocks=16384, version=2 00:10:42.815 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:42.815 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:10:43.746 Discarding blocks...Done. 00:10:43.746 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:43.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2729356 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.642 00:10:45.642 real 0m2.734s 00:10:45.642 user 0m0.023s 00:10:45.642 sys 0m0.054s 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.642 ************************************ 00:10:45.642 END TEST filesystem_in_capsule_xfs 00:10:45.642 ************************************ 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.642 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.900 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.900 17:53:31 
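Once the xfs pass finishes, the suite removes the partition, disconnects with nvme disconnect, and waits for the namespace to vanish: waitforserial_disconnect, traced just below, polls lsblk until no block device carries the serial SPDKISFASTANDAWESOME, the mirror image of the waitforserial loop used right after connect. A minimal sketch of such a poll, assuming the same bounded retry shape as the connect-side helper:

    serial=SPDKISFASTANDAWESOME
    i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ <= 15 )) || { echo "device still present" >&2; exit 1; }
        sleep 1
    done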
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:10:45.900 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:45.900 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.900 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:45.900 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2729356 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2729356 ']' 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2729356 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2729356 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2729356' 00:10:45.900 killing process with pid 2729356 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2729356 00:10:45.900 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2729356 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.466 00:10:46.466 real 0m11.303s 00:10:46.466 user 0m43.198s 
00:10:46.466 sys 0m1.709s 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.466 ************************************ 00:10:46.466 END TEST nvmf_filesystem_in_capsule 00:10:46.466 ************************************ 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.466 rmmod nvme_tcp 00:10:46.466 rmmod nvme_fabrics 00:10:46.466 rmmod nvme_keyring 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.466 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:49.001 00:10:49.001 real 0m28.612s 00:10:49.001 user 1m32.787s 00:10:49.001 sys 0m5.232s 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.001 ************************************ 00:10:49.001 END TEST nvmf_filesystem 00:10:49.001 ************************************ 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:49.001 ************************************ 00:10:49.001 START TEST nvmf_target_discovery 00:10:49.001 ************************************ 00:10:49.001 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:49.001 * Looking for test storage... 00:10:49.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.002 17:53:34 
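The discovery test begins by sourcing test/nvmf/common.sh, which (as traced above) pins the listener ports and derives the initiator's identity once per run: nvme gen-hostnqn emits an NQN whose trailing UUID doubles as the host ID. A sketch of those assignments; the values match this log, but the ##*: strip shown for the host ID is an assumption, since the trace only shows the resulting value:

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # reuse the UUID as the host ID (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")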
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.002 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:50.905 17:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:50.905 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:50.905 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:50.905 Found net devices under 0000:09:00.0: cvl_0_0 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.905 17:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:50.905 Found net devices under 0000:09:00.1: cvl_0_1 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.905 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.906 17:53:36 
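The nvmf_tcp_init sequence above builds a two-endpoint TCP topology out of one dual-port E810 NIC: the first port (cvl_0_0) moves into a fresh network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace (the firewall rule and ping checks follow just below):

    ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up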
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:10:50.906 00:10:50.906 --- 10.0.0.2 ping statistics --- 00:10:50.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.906 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:10:50.906 00:10:50.906 --- 10.0.0.1 ping statistics --- 00:10:50.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.906 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2732716 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2732716 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2732716 ']' 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.906 17:53:36 
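nvmfappstart -m 0xF then launches the target binary inside that namespace and records its PID so the final cleanup (killprocess, as seen at the end of the filesystem suite) can reach it; waitforlisten blocks until the app answers on its RPC socket. A minimal sketch of that launch-and-wait pattern, assuming it runs from the SPDK repo root and that the default /var/tmp/spdk.sock socket and the max_retries=100 bound from the trace apply:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                       # poll until the RPC socket answers
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done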
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.906 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.906 [2024-07-24 17:53:37.014258] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:10:50.906 [2024-07-24 17:53:37.014351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.906 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.906 [2024-07-24 17:53:37.084792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.164 [2024-07-24 17:53:37.208297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.164 [2024-07-24 17:53:37.208355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.164 [2024-07-24 17:53:37.208382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.164 [2024-07-24 17:53:37.208394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.164 [2024-07-24 17:53:37.208407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.164 [2024-07-24 17:53:37.208498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.164 [2024-07-24 17:53:37.208571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.164 [2024-07-24 17:53:37.208632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.164 [2024-07-24 17:53:37.208635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.728 [2024-07-24 17:53:37.968789] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:51.728 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.729 Null1 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.729 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 [2024-07-24 17:53:38.009064] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 Null2 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 
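Each pass of the seq 1 4 loop above provisions one discoverable subsystem from scratch; the Null1 pass is complete at this point, and the remaining passes continue below with the same shape. After the one-time nvmf_create_transport -t tcp -o -u 8192, a single iteration issues:

    i=1                                                           # the loop runs i=1..4
    rpc.py bdev_null_create Null$i 102400 512                     # 102400 MB null bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i                                 # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420                                # listen on the target-side address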
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 Null3 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 Null4 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.987 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:10:51.988 00:10:51.988 Discovery Log Number of Records 6, Generation counter 6 00:10:51.988 =====Discovery Log Entry 0====== 00:10:51.988 trtype: tcp 00:10:51.988 adrfam: ipv4 00:10:51.988 subtype: current discovery subsystem 00:10:51.988 treq: not required 00:10:51.988 portid: 0 00:10:51.988 trsvcid: 4420 00:10:51.988 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.988 traddr: 10.0.0.2 00:10:51.988 eflags: explicit discovery connections, duplicate discovery information 00:10:51.988 sectype: none 00:10:51.988 =====Discovery Log Entry 1====== 00:10:51.988 trtype: tcp 00:10:51.988 adrfam: ipv4 00:10:51.988 subtype: nvme subsystem 00:10:51.988 treq: not required 00:10:51.988 portid: 0 00:10:51.988 trsvcid: 4420 00:10:51.988 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:51.988 traddr: 10.0.0.2 00:10:51.988 eflags: none 00:10:51.988 sectype: none 00:10:51.988 =====Discovery Log Entry 2====== 00:10:51.988 trtype: tcp 00:10:51.988 adrfam: ipv4 00:10:51.988 subtype: nvme subsystem 00:10:51.988 treq: not required 00:10:51.988 portid: 0 00:10:51.988 trsvcid: 4420 00:10:51.988 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:51.988 traddr: 10.0.0.2 00:10:51.988 eflags: none 00:10:51.988 sectype: none 00:10:51.988 =====Discovery Log Entry 3====== 00:10:51.988 trtype: tcp 00:10:51.988 adrfam: ipv4 00:10:51.988 subtype: nvme subsystem 00:10:51.988 treq: not required 00:10:51.988 portid: 0 00:10:51.988 trsvcid: 4420 00:10:51.988 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:51.988 traddr: 10.0.0.2 00:10:51.988 eflags: none 00:10:51.988 sectype: none 00:10:51.988 =====Discovery Log Entry 4====== 00:10:51.988 trtype: tcp 00:10:51.988 adrfam: ipv4 00:10:51.988 subtype: nvme subsystem 00:10:51.988 treq: not required 00:10:51.988 portid: 0 00:10:51.988 trsvcid: 4420 00:10:51.988 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:51.988 traddr: 10.0.0.2 00:10:51.988 eflags: none 00:10:51.988 sectype: none 00:10:51.988 =====Discovery Log Entry 5====== 00:10:51.988 trtype: tcp 00:10:51.988 adrfam: ipv4 00:10:51.988 subtype: discovery subsystem referral 00:10:51.988 treq: not required 00:10:51.988 portid: 0 00:10:51.988 trsvcid: 4430 00:10:51.988 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.988 traddr: 10.0.0.2 00:10:51.988 eflags: none 00:10:51.988 sectype: none 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:51.988 Perform nvmf subsystem discovery via RPC 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.988 [ 00:10:51.988 { 00:10:51.988 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:51.988 "subtype": "Discovery", 00:10:51.988 "listen_addresses": [ 00:10:51.988 { 00:10:51.988 "trtype": "TCP", 00:10:51.988 "adrfam": "IPv4", 00:10:51.988 "traddr": "10.0.0.2", 00:10:51.988 "trsvcid": "4420" 00:10:51.988 } 00:10:51.988 ], 00:10:51.988 "allow_any_host": true, 00:10:51.988 "hosts": [] 00:10:51.988 }, 00:10:51.988 { 00:10:51.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.988 "subtype": "NVMe", 00:10:51.988 "listen_addresses": [ 00:10:51.988 { 00:10:51.988 "trtype": "TCP", 00:10:51.988 "adrfam": "IPv4", 00:10:51.988 
"traddr": "10.0.0.2", 00:10:51.988 "trsvcid": "4420" 00:10:51.988 } 00:10:51.988 ], 00:10:51.988 "allow_any_host": true, 00:10:51.988 "hosts": [], 00:10:51.988 "serial_number": "SPDK00000000000001", 00:10:51.988 "model_number": "SPDK bdev Controller", 00:10:51.988 "max_namespaces": 32, 00:10:51.988 "min_cntlid": 1, 00:10:51.988 "max_cntlid": 65519, 00:10:51.988 "namespaces": [ 00:10:51.988 { 00:10:51.988 "nsid": 1, 00:10:51.988 "bdev_name": "Null1", 00:10:51.988 "name": "Null1", 00:10:51.988 "nguid": "056C47D00E9241048CF22AB3A0C642A9", 00:10:51.988 "uuid": "056c47d0-0e92-4104-8cf2-2ab3a0c642a9" 00:10:51.988 } 00:10:51.988 ] 00:10:51.988 }, 00:10:51.988 { 00:10:51.988 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:51.988 "subtype": "NVMe", 00:10:51.988 "listen_addresses": [ 00:10:51.988 { 00:10:51.988 "trtype": "TCP", 00:10:51.988 "adrfam": "IPv4", 00:10:51.988 "traddr": "10.0.0.2", 00:10:51.988 "trsvcid": "4420" 00:10:51.988 } 00:10:51.988 ], 00:10:51.988 "allow_any_host": true, 00:10:51.988 "hosts": [], 00:10:51.988 "serial_number": "SPDK00000000000002", 00:10:51.988 "model_number": "SPDK bdev Controller", 00:10:51.988 "max_namespaces": 32, 00:10:51.988 "min_cntlid": 1, 00:10:51.988 "max_cntlid": 65519, 00:10:51.988 "namespaces": [ 00:10:51.988 { 00:10:51.988 "nsid": 1, 00:10:51.988 "bdev_name": "Null2", 00:10:51.988 "name": "Null2", 00:10:51.988 "nguid": "C52B8D6B0C4941FEAD4EB40209725F5A", 00:10:51.988 "uuid": "c52b8d6b-0c49-41fe-ad4e-b40209725f5a" 00:10:51.988 } 00:10:51.988 ] 00:10:51.988 }, 00:10:51.988 { 00:10:51.988 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:51.988 "subtype": "NVMe", 00:10:51.988 "listen_addresses": [ 00:10:51.988 { 00:10:51.988 "trtype": "TCP", 00:10:51.988 "adrfam": "IPv4", 00:10:51.988 "traddr": "10.0.0.2", 00:10:51.988 "trsvcid": "4420" 00:10:51.988 } 00:10:51.988 ], 00:10:51.988 "allow_any_host": true, 00:10:51.988 "hosts": [], 00:10:51.988 "serial_number": "SPDK00000000000003", 00:10:51.988 "model_number": "SPDK bdev Controller", 00:10:51.988 "max_namespaces": 32, 00:10:51.988 "min_cntlid": 1, 00:10:51.988 "max_cntlid": 65519, 00:10:51.988 "namespaces": [ 00:10:51.988 { 00:10:51.988 "nsid": 1, 00:10:51.988 "bdev_name": "Null3", 00:10:51.988 "name": "Null3", 00:10:51.988 "nguid": "9C2E97FB567E4C9785236CB292D0D39C", 00:10:51.988 "uuid": "9c2e97fb-567e-4c97-8523-6cb292d0d39c" 00:10:51.988 } 00:10:51.988 ] 00:10:51.988 }, 00:10:51.988 { 00:10:51.988 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:51.988 "subtype": "NVMe", 00:10:51.988 "listen_addresses": [ 00:10:51.988 { 00:10:51.988 "trtype": "TCP", 00:10:51.988 "adrfam": "IPv4", 00:10:51.988 "traddr": "10.0.0.2", 00:10:51.988 "trsvcid": "4420" 00:10:51.988 } 00:10:51.988 ], 00:10:51.988 "allow_any_host": true, 00:10:51.988 "hosts": [], 00:10:51.988 "serial_number": "SPDK00000000000004", 00:10:51.988 "model_number": "SPDK bdev Controller", 00:10:51.988 "max_namespaces": 32, 00:10:51.988 "min_cntlid": 1, 00:10:51.988 "max_cntlid": 65519, 00:10:51.988 "namespaces": [ 00:10:51.988 { 00:10:51.988 "nsid": 1, 00:10:51.988 "bdev_name": "Null4", 00:10:51.988 "name": "Null4", 00:10:51.988 "nguid": "31B1F5D3904D471EA48BE33794CAB7AC", 00:10:51.988 "uuid": "31b1f5d3-904d-471e-a48b-e33794cab7ac" 00:10:51.988 } 00:10:51.988 ] 00:10:51.988 } 00:10:51.988 ] 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:51.988 17:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.988 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:52.247 17:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.247 rmmod nvme_tcp 00:10:52.247 rmmod nvme_fabrics 00:10:52.247 rmmod nvme_keyring 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.247 17:53:38 
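Before the teardown above, the nvme discover dump showed the expected 6 records: the current discovery subsystem, cnode1 through cnode4, and the port-4430 referral. The teardown then mirrors the setup in reverse (target/discovery.sh @42-@50); a condensed, hedged sketch assembled from the trace:

# hedged sketch of the teardown traced above
for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')   # must come back empty
[[ -z $check_bdevs ]]
# nvmftestfini then unloads the kernel initiator modules, hence the rmmod lines:
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics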
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2732716 ']' 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2732716 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2732716 ']' 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2732716 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2732716 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2732716' 00:10:52.247 killing process with pid 2732716 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2732716 00:10:52.247 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2732716 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.506 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.044 00:10:55.044 real 0m6.052s 00:10:55.044 user 0m6.840s 00:10:55.044 sys 0m1.895s 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.044 ************************************ 00:10:55.044 END TEST nvmf_target_discovery 00:10:55.044 ************************************ 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.044 ************************************ 00:10:55.044 START TEST nvmf_referrals 00:10:55.044 ************************************ 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:55.044 * Looking for test storage... 00:10:55.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.044 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.045 17:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.045 17:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.045 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:56.946 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.946 17:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:56.946 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.946 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:56.947 Found net devices under 0000:09:00.0: cvl_0_0 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 
00:10:56.947 Found net devices under 0000:09:00.1: cvl_0_1 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:56.947 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:56.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:10:56.947 00:10:56.947 --- 10.0.0.2 ping statistics --- 00:10:56.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.947 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:10:56.947 00:10:56.947 --- 10.0.0.1 ping statistics --- 00:10:56.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.947 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2734913 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2734913 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2734913 ']' 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
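At this point nvmf_tcp_init has split the two e810 ports across a network namespace and verified connectivity both ways (0.307 ms and 0.203 ms above) before starting the target. The topology commands, condensed and slightly reordered from the trace:

ip netns add cvl_0_0_ns_spdk                              # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # first port goes to the target
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator keeps the second port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator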
00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.947 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.947 [2024-07-24 17:53:43.203722] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:10:56.947 [2024-07-24 17:53:43.203809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.205 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.205 [2024-07-24 17:53:43.283756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.205 [2024-07-24 17:53:43.407653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.205 [2024-07-24 17:53:43.407703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.205 [2024-07-24 17:53:43.407726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.205 [2024-07-24 17:53:43.407740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.206 [2024-07-24 17:53:43.407751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.206 [2024-07-24 17:53:43.407848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.206 [2024-07-24 17:53:43.407914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.206 [2024-07-24 17:53:43.407938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.206 [2024-07-24 17:53:43.407942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 [2024-07-24 17:53:43.569920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.463 17:53:43 
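With the target up (reactors on cores 0-3), referrals.sh registers a discovery listener on the non-default port 8009 and three referral entries, then checks the count; a condensed sketch of the steps starting above and completing below:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # -u: 8 KiB I/O unit size
rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
(( $(rpc_cmd nvmf_discovery_get_referrals | jq length) == 3 ))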
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 [2024-07-24 17:53:43.582180] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.463 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.720 17:53:43 
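The referral list is read twice, once from the target's RPC view and once host-side via nvme discover against port 8009, and the sorted address lists must match; after removing all three entries, both views must come back empty. A hedged reconstruction of the get_referral_ips helper (target/referrals.sh @19-@26), pieced together from the trace; NVME_HOSTNQN and NVME_HOSTID are set earlier by nvmf/common.sh:

get_referral_ips() {
    if [[ $1 == rpc ]]; then
        # word-splitting in echo joins the sorted IPs on one line, e.g. "127.0.0.2 127.0.0.3 127.0.0.4"
        echo $(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    elif [[ $1 == nvme ]]; then
        echo $(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 8009 -o json |
            jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    fi
}
[[ $(get_referral_ips rpc) == $(get_referral_ips nvme) ]]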
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.720 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
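[Annotation] The referral checks above boil down to comparing two views of the same data: the target-side RPC listing and the referral records the initiator reads back from the discovery log page. A minimal standalone sketch of that comparison, assuming SPDK's rpc.py is on PATH, the discovery listener from this run is up on 10.0.0.2:8009, and HOSTNQN/HOSTID carry the values used throughout this log:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a

rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430

# Target-side view: referral addresses known to the discovery subsystem.
rpc_ips=$(rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

# Initiator-side view: referral records in the discovery log page (everything
# except the "current discovery subsystem" entry itself).
nvme_ips=$(nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)

[[ "$rpc_ips" == "$nvme_ips" ]] || echo "referral mismatch: $rpc_ips vs $nvme_ips" >&2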
00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.978 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.235 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:58.492 17:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:58.492 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
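[Annotation] The referrals.sh@67/@68 and @75/@76 assertions above distinguish the two referral flavors: a referral added with -n nqn.2016-06.io.spdk:cnode1 is reported as an "nvme subsystem" record carrying that subnqn, while one added with -n discovery comes back as a "discovery subsystem referral" under the well-known NQN nqn.2014-08.org.nvmexpress.discovery. A rough equivalent of the get_discovery_entries helper doing that filtering (same target address as this run; jq's --arg is used here in place of the inline string baked into referrals.sh):

get_discovery_entries() {
    local subtype=$1
    nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
}

get_discovery_entries 'nvme subsystem' | jq -r .subnqn
# -> nqn.2016-06.io.spdk:cnode1 while the subsystem referral exists

get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn
# -> nqn.2014-08.org.nvmexpress.discovery for a "-n discovery" referral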
00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.750 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
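[Annotation] The teardown that follows (nvmftestfini) is best-effort: it syncs, unloads the initiator-side kernel modules with errors tolerated, kills the nvmf_tgt process, and tears the namespace plumbing back down. A condensed sketch, assuming NVMF_PID holds the target pid and that the harness's _remove_spdk_ns amounts to deleting the cvl_0_0_ns_spdk namespace (its internals are not shown in this log):

sync
set +e                      # module unload may fail while references linger
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e

kill "$NVMF_PID"            # stop the target app started by this shell
wait "$NVMF_PID" 2>/dev/null
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # clear the initiator interface, as at common.sh@279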
00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.007 rmmod nvme_tcp 00:10:59.007 rmmod nvme_fabrics 00:10:59.007 rmmod nvme_keyring 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2734913 ']' 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2734913 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2734913 ']' 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2734913 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:59.007 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2734913 00:10:59.265 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:59.265 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:59.265 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2734913' 00:10:59.265 killing process with pid 2734913 00:10:59.265 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2734913 00:10:59.265 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2734913 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.524 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.429 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.429 00:11:01.429 real 0m6.802s 00:11:01.429 user 0m9.889s 00:11:01.429 sys 0m2.226s 00:11:01.429 17:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.429 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 ************************************ 00:11:01.429 END TEST nvmf_referrals 00:11:01.429 ************************************ 00:11:01.429 17:53:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:01.429 17:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.429 17:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.429 17:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 ************************************ 00:11:01.429 START TEST nvmf_connect_disconnect 00:11:01.429 ************************************ 00:11:01.429 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:01.688 * Looking for test storage... 00:11:01.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.688 17:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.688 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:03.592 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:03.592 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.592 17:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:03.592 Found net devices under 0000:09:00.0: cvl_0_0 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:03.592 Found net devices under 0000:09:00.1: cvl_0_1 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.592 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.593 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.852 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.852 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.852 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.852 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.852 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.852 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.852 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:11:03.852 00:11:03.852 --- 10.0.0.2 ping statistics --- 00:11:03.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.852 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:11:03.853 00:11:03.853 --- 10.0.0.1 ping statistics --- 00:11:03.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.853 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2737204 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2737204 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2737204 ']' 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.853 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.853 [2024-07-24 17:53:50.046039] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
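[Annotation] Between the device scan and the app start above sits the phy-mode TCP topology: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, its sibling (cvl_0_1) stays on the host as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. Condensed from the commands logged above, with SPDK_BIN standing in for this workspace's build/bin path:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                   # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side (host)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> host

# -m 0xF pins the target to four cores and -e 0xFFFF enables all trace groups,
# matching the nvmf_tgt invocation logged above.
ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
NVMF_PID=$!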
00:11:03.853 [2024-07-24 17:53:50.046192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.853 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.149 [2024-07-24 17:53:50.131847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.150 [2024-07-24 17:53:50.245330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.150 [2024-07-24 17:53:50.245381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.150 [2024-07-24 17:53:50.245395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.150 [2024-07-24 17:53:50.245407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.150 [2024-07-24 17:53:50.245417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.150 [2024-07-24 17:53:50.245476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.150 [2024-07-24 17:53:50.245537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.150 [2024-07-24 17:53:50.245561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.150 [2024-07-24 17:53:50.245566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 [2024-07-24 17:53:51.076803] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.082 17:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 [2024-07-24 17:53:51.134171] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:05.082 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:08.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.222 17:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:19.222 17:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:19.222 17:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.222 17:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:19.222 17:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.222 17:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:19.222 17:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.222 17:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.222 rmmod nvme_tcp 00:11:19.222 rmmod nvme_fabrics 00:11:19.222 rmmod nvme_keyring 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2737204 ']' 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2737204 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2737204 ']' 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2737204 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2737204 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2737204' 00:11:19.222 killing process with pid 2737204 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2737204 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2737204 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.222 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.753 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.753 00:11:21.754 real 0m19.785s 00:11:21.754 user 1m0.177s 00:11:21.754 sys 0m3.408s 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.754 17:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:21.754 ************************************ 00:11:21.754 END TEST nvmf_connect_disconnect 00:11:21.754 ************************************ 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.754 ************************************ 00:11:21.754 START TEST nvmf_multitarget 00:11:21.754 ************************************ 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:21.754 * Looking for test storage... 00:11:21.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.754 17:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.754 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.656 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:23.657 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.657 17:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:23.657 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:23.657 Found net devices under 0000:09:00.0: cvl_0_0 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:23.657 Found net devices under 0000:09:00.1: cvl_0_1 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:11:23.657 00:11:23.657 --- 10.0.0.2 ping statistics --- 00:11:23.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.657 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:11:23.657 00:11:23.657 --- 10.0.0.1 ping statistics --- 00:11:23.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.657 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2741590 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2741590 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2741590 ']' 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
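Taken together, the nvmf_tcp_init steps traced just above build a two-port loopback topology on the e810 NIC, and nvmfappstart then launches the target inside the namespace. Condensed from the commands in the trace (names and addresses as in this run; the background launch is a simplification of what ip netns exec plus waitforlisten do here):

    # give the target its own network stack by moving one physical port into a namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # the initiator keeps cvl_0_1 in the root namespace; the target answers on 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side; both directions are then
    # verified with the single pings shown above
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # start the target in the namespace on 4 cores with all trace groups enabled
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &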
00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.657 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.657 [2024-07-24 17:54:09.684348] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:11:23.658 [2024-07-24 17:54:09.684469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.658 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.658 [2024-07-24 17:54:09.748948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.658 [2024-07-24 17:54:09.858766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.658 [2024-07-24 17:54:09.858819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.658 [2024-07-24 17:54:09.858845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.658 [2024-07-24 17:54:09.858859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.658 [2024-07-24 17:54:09.858871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.658 [2024-07-24 17:54:09.858950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.658 [2024-07-24 17:54:09.859026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.658 [2024-07-24 17:54:09.859123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.658 [2024-07-24 17:54:09.859142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.916 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.916 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:23.916 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:23.916 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:23.916 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.916 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.916 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:23.916 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:23.916 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:23.916 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:23.916 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:24.174 "nvmf_tgt_1" 00:11:24.174 17:54:10 
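The multitarget assertions here are all driven through multitarget_rpc.py, with jq counting targets after each step. The core of the test, condensed from the rpc calls in this trace (the -s 32 flag is passed through exactly as shown; its meaning is not spelled out in the log):

    rpc=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at startup
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default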
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:24.174 "nvmf_tgt_2" 00:11:24.174 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:24.174 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:24.431 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:24.431 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:24.431 true 00:11:24.431 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:24.689 true 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.689 rmmod nvme_tcp 00:11:24.689 rmmod nvme_fabrics 00:11:24.689 rmmod nvme_keyring 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2741590 ']' 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2741590 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2741590 ']' 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2741590 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2741590 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2741590' 00:11:24.689 killing process with pid 2741590 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2741590 00:11:24.689 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2741590 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.948 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.482 00:11:27.482 real 0m5.762s 00:11:27.482 user 0m6.687s 00:11:27.482 sys 0m1.894s 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:27.482 ************************************ 00:11:27.482 END TEST nvmf_multitarget 00:11:27.482 ************************************ 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.482 ************************************ 00:11:27.482 START TEST nvmf_rpc 00:11:27.482 ************************************ 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:27.482 * Looking for test storage... 
00:11:27.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.482 17:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.482 17:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.383 17:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:29.383 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.383 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:29.384 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.384 
17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:29.384 Found net devices under 0000:09:00.0: cvl_0_0 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:29.384 Found net devices under 0000:09:00.1: cvl_0_1 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.384 17:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:11:29.384 00:11:29.384 --- 10.0.0.2 ping statistics --- 00:11:29.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.384 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:29.384 00:11:29.384 --- 10.0.0.1 ping statistics --- 00:11:29.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.384 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2743685 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2743685 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2743685 ']' 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.384 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 [2024-07-24 17:54:15.618190] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:11:29.384 [2024-07-24 17:54:15.618280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.643 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.643 [2024-07-24 17:54:15.697673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.643 [2024-07-24 17:54:15.824491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.643 [2024-07-24 17:54:15.824552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.643 [2024-07-24 17:54:15.824569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.643 [2024-07-24 17:54:15.824582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.643 [2024-07-24 17:54:15.824594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
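The startup notices above describe the tracing hooks enabled by -e 0xFFFF. Per the application's own hint, a snapshot can be pulled while the target runs; how the output is captured is up to you (the copy destination below is arbitrary):

    # decode the shared-memory trace for app instance 0 (the -i 0 passed to nvmf_tgt)
    spdk_trace -s nvmf -i 0
    # or keep the raw trace file for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0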
00:11:29.643 [2024-07-24 17:54:15.824889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.643 [2024-07-24 17:54:15.824942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.643 [2024-07-24 17:54:15.824968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.643 [2024-07-24 17:54:15.824971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:29.901 "tick_rate": 2700000000, 00:11:29.901 "poll_groups": [ 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_000", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [] 00:11:29.901 }, 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_001", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [] 00:11:29.901 }, 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_002", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [] 00:11:29.901 }, 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_003", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [] 00:11:29.901 } 00:11:29.901 ] 00:11:29.901 }' 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:29.901 17:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.901 [2024-07-24 17:54:16.079885] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:29.901 "tick_rate": 2700000000, 00:11:29.901 "poll_groups": [ 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_000", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [ 00:11:29.901 { 00:11:29.901 "trtype": "TCP" 00:11:29.901 } 00:11:29.901 ] 00:11:29.901 }, 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_001", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [ 00:11:29.901 { 00:11:29.901 "trtype": "TCP" 00:11:29.901 } 00:11:29.901 ] 00:11:29.901 }, 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_002", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [ 00:11:29.901 { 00:11:29.901 "trtype": "TCP" 00:11:29.901 } 00:11:29.901 ] 00:11:29.901 }, 00:11:29.901 { 00:11:29.901 "name": "nvmf_tgt_poll_group_003", 00:11:29.901 "admin_qpairs": 0, 00:11:29.901 "io_qpairs": 0, 00:11:29.901 "current_admin_qpairs": 0, 00:11:29.901 "current_io_qpairs": 0, 00:11:29.901 "pending_bdev_io": 0, 00:11:29.901 "completed_nvme_io": 0, 00:11:29.901 "transports": [ 00:11:29.901 { 00:11:29.901 "trtype": "TCP" 00:11:29.901 } 00:11:29.901 ] 00:11:29.901 } 00:11:29.901 ] 00:11:29.901 }' 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.901 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:29.901 17:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:29.902 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:29.902 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:29.902 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.160 Malloc1 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.160 [2024-07-24 17:54:16.242060] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:30.160 [2024-07-24 17:54:16.264599] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:30.160 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:30.160 could not add new controller: failed to write to nvme-fabrics device 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.160 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.732 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.732 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:30.732 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.732 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:30.732 17:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:33.280 17:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:33.280 17:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:33.280 17:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.280 17:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:33.280 17:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.280 17:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:33.280 17:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.280 [2024-07-24 17:54:19.114409] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:33.280 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:33.280 could not add new controller: failed to write to nvme-fabrics device 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.280 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.537 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.537 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:33.537 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.537 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:33.537 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.063 [2024-07-24 17:54:21.848357] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.063 
17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.063 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.321 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.321 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:36.321 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.321 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:36.321 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:38.216 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:38.216 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:38.216 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.216 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:38.216 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.216 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:38.216 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 [2024-07-24 17:54:24.613698] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.474 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.039 17:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.039 17:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:11:39.039 17:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.039 17:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:39.039 17:54:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.565 [2024-07-24 17:54:27.339901] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.565 17:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.822 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.822 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:41.822 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.822 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:41.823 17:54:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.345 17:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.345 [2024-07-24 17:54:30.166652] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.345 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.346 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.346 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.346 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.603 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.603 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:44.603 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.603 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:44.603 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 [2024-07-24 17:54:32.935789] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.129 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.386 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.386 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:47.386 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.386 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:47.386 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:49.914 17:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 [2024-07-24 17:54:35.738792] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.914 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.914 [2024-07-24 17:54:35.786858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 [2024-07-24 17:54:35.835012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 [2024-07-24 17:54:35.883208] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 [2024-07-24 17:54:35.931364] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.915 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:49.915 "tick_rate": 2700000000, 00:11:49.915 "poll_groups": [ 00:11:49.915 { 00:11:49.915 "name": "nvmf_tgt_poll_group_000", 00:11:49.915 "admin_qpairs": 2, 00:11:49.915 "io_qpairs": 84, 00:11:49.915 "current_admin_qpairs": 0, 00:11:49.915 "current_io_qpairs": 0, 00:11:49.915 "pending_bdev_io": 0, 00:11:49.915 "completed_nvme_io": 213, 00:11:49.915 "transports": [ 00:11:49.915 { 00:11:49.915 "trtype": "TCP" 00:11:49.915 } 00:11:49.915 ] 00:11:49.915 }, 00:11:49.915 { 00:11:49.915 "name": "nvmf_tgt_poll_group_001", 00:11:49.915 "admin_qpairs": 2, 00:11:49.916 "io_qpairs": 84, 00:11:49.916 "current_admin_qpairs": 0, 00:11:49.916 "current_io_qpairs": 0, 00:11:49.916 "pending_bdev_io": 0, 00:11:49.916 "completed_nvme_io": 136, 00:11:49.916 "transports": [ 00:11:49.916 { 00:11:49.916 "trtype": "TCP" 00:11:49.916 } 00:11:49.916 ] 00:11:49.916 }, 00:11:49.916 { 00:11:49.916 "name": "nvmf_tgt_poll_group_002", 00:11:49.916 "admin_qpairs": 1, 00:11:49.916 "io_qpairs": 84, 00:11:49.916 "current_admin_qpairs": 0, 00:11:49.916 "current_io_qpairs": 0, 00:11:49.916 "pending_bdev_io": 0, 00:11:49.916 "completed_nvme_io": 194, 00:11:49.916 "transports": [ 00:11:49.916 { 00:11:49.916 "trtype": "TCP" 00:11:49.916 } 00:11:49.916 ] 00:11:49.916 }, 00:11:49.916 { 00:11:49.916 "name": "nvmf_tgt_poll_group_003", 00:11:49.916 "admin_qpairs": 2, 00:11:49.916 "io_qpairs": 84, 00:11:49.916 "current_admin_qpairs": 0, 00:11:49.916 "current_io_qpairs": 0, 00:11:49.916 "pending_bdev_io": 0, 00:11:49.916 "completed_nvme_io": 143, 00:11:49.916 "transports": [ 00:11:49.916 { 00:11:49.916 "trtype": "TCP" 00:11:49.916 } 00:11:49.916 ] 00:11:49.916 } 00:11:49.916 ] 00:11:49.916 }' 00:11:49.916 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:49.916 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:49.916 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:49.916 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.916 rmmod nvme_tcp 00:11:49.916 rmmod nvme_fabrics 00:11:49.916 rmmod nvme_keyring 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2743685 ']' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2743685 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2743685 ']' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2743685 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2743685 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2743685' 00:11:49.916 killing process with pid 2743685 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2743685 00:11:49.916 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2743685 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.175 17:54:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:52.711 00:11:52.711 real 0m25.185s 00:11:52.711 user 1m21.533s 00:11:52.711 sys 0m4.157s 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 ************************************ 00:11:52.711 END TEST nvmf_rpc 00:11:52.711 ************************************ 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 ************************************ 00:11:52.711 START TEST nvmf_invalid 00:11:52.711 ************************************ 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.711 * Looking for test storage... 00:11:52.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:52.711 17:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.711 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:52.712 17:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.712 17:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:54.615 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:54.615 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.615 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:54.616 Found net devices under 0000:09:00.0: cvl_0_0 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.616 17:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:54.616 Found net devices under 0000:09:00.1: cvl_0_1 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:11:54.616 00:11:54.616 --- 10.0.0.2 ping statistics --- 00:11:54.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.616 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:11:54.616 00:11:54.616 --- 10.0.0.1 ping statistics --- 00:11:54.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.616 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2748169 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2748169 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2748169 ']' 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.616 17:54:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.616 17:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.616 [2024-07-24 17:54:40.841183] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:11:54.616 [2024-07-24 17:54:40.841293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.616 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.874 [2024-07-24 17:54:40.912777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.874 [2024-07-24 17:54:41.037032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.874 [2024-07-24 17:54:41.037087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.874 [2024-07-24 17:54:41.037121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.874 [2024-07-24 17:54:41.037136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.874 [2024-07-24 17:54:41.037147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
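For orientation: one iteration of the loop that dominates the nvmf_rpc trace above, plus the jsum tally that follows it, reduces to a handful of rpc.py calls. This is a condensed sketch, not the test script itself; the checkout path, NQN, bdev name, address, and port are the ones this run used.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    nqn=nqn.2016-06.io.spdk:cnode1

    # Body of the target/rpc.sh@99 loop: build a subsystem up, tear it down again.
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"

    # jsum '<filter>' from rpc.sh@19: sum one numeric field over all poll groups.
    # With 4 poll groups at 84 io_qpairs each, this prints the 336 asserted above.
    $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'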
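The nvmf_invalid setup traced above (nvmftestinit followed by nvmfappstart) boils down to moving one E810 port into a network namespace, checking connectivity both ways, and starting the target inside that namespace. A condensed sketch using the interface names, addresses, and flags from this run:

    # Target side lives in the namespace; initiator side stays in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity pings in both directions, as in the trace, then start the target app;
    # waitforlisten then polls until /var/tmp/spdk.sock answers.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &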
00:11:54.874 [2024-07-24 17:54:41.037221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.874 [2024-07-24 17:54:41.037257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.874 [2024-07-24 17:54:41.037309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.874 [2024-07-24 17:54:41.037312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.808 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.808 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:55.808 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:55.808 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:55.808 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:55.808 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.809 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:55.809 17:54:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3780 00:11:55.809 [2024-07-24 17:54:42.045701] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:55.809 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:55.809 { 00:11:55.809 "nqn": "nqn.2016-06.io.spdk:cnode3780", 00:11:55.809 "tgt_name": "foobar", 00:11:55.809 "method": "nvmf_create_subsystem", 00:11:55.809 "req_id": 1 00:11:55.809 } 00:11:55.809 Got JSON-RPC error response 00:11:55.809 response: 00:11:55.809 { 00:11:55.809 "code": -32603, 00:11:55.809 "message": "Unable to find target foobar" 00:11:55.809 }' 00:11:55.809 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:55.809 { 00:11:55.809 "nqn": "nqn.2016-06.io.spdk:cnode3780", 00:11:55.809 "tgt_name": "foobar", 00:11:55.809 "method": "nvmf_create_subsystem", 00:11:55.809 "req_id": 1 00:11:55.809 } 00:11:55.809 Got JSON-RPC error response 00:11:55.809 response: 00:11:55.809 { 00:11:55.809 "code": -32603, 00:11:55.809 "message": "Unable to find target foobar" 00:11:55.809 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:55.809 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:55.809 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5277 00:11:56.067 [2024-07-24 17:54:42.290509] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5277: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:56.067 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:56.067 { 00:11:56.067 "nqn": "nqn.2016-06.io.spdk:cnode5277", 00:11:56.067 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:56.067 "method": "nvmf_create_subsystem", 00:11:56.067 "req_id": 1 00:11:56.067 } 00:11:56.067 Got JSON-RPC error response 
00:11:56.067 response: 00:11:56.067 { 00:11:56.067 "code": -32602, 00:11:56.067 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:56.067 }' 00:11:56.067 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:56.067 { 00:11:56.067 "nqn": "nqn.2016-06.io.spdk:cnode5277", 00:11:56.067 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:56.067 "method": "nvmf_create_subsystem", 00:11:56.067 "req_id": 1 00:11:56.067 } 00:11:56.067 Got JSON-RPC error response 00:11:56.067 response: 00:11:56.067 { 00:11:56.067 "code": -32602, 00:11:56.067 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:56.067 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:56.067 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:56.067 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9079 00:11:56.355 [2024-07-24 17:54:42.535292] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9079: invalid model number 'SPDK_Controller' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:56.355 { 00:11:56.355 "nqn": "nqn.2016-06.io.spdk:cnode9079", 00:11:56.355 "model_number": "SPDK_Controller\u001f", 00:11:56.355 "method": "nvmf_create_subsystem", 00:11:56.355 "req_id": 1 00:11:56.355 } 00:11:56.355 Got JSON-RPC error response 00:11:56.355 response: 00:11:56.355 { 00:11:56.355 "code": -32602, 00:11:56.355 "message": "Invalid MN SPDK_Controller\u001f" 00:11:56.355 }' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:56.355 { 00:11:56.355 "nqn": "nqn.2016-06.io.spdk:cnode9079", 00:11:56.355 "model_number": "SPDK_Controller\u001f", 00:11:56.355 "method": "nvmf_create_subsystem", 00:11:56.355 "req_id": 1 00:11:56.355 } 00:11:56.355 Got JSON-RPC error response 00:11:56.355 response: 00:11:56.355 { 00:11:56.355 "code": -32602, 00:11:56.355 "message": "Invalid MN SPDK_Controller\u001f" 00:11:56.355 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 69 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.355 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.356 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 
00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 
00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'EXDuo,?TC+fEbDN.Dr NJ' 00:11:56.620 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'EXDuo,?TC+fEbDN.Dr NJ' nqn.2016-06.io.spdk:cnode1458 00:11:56.620 [2024-07-24 17:54:42.872447] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1458: invalid serial number 'EXDuo,?TC+fEbDN.Dr NJ' 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:56.879 { 00:11:56.879 "nqn": "nqn.2016-06.io.spdk:cnode1458", 00:11:56.879 "serial_number": "EXDuo,?TC+fEbDN.Dr NJ", 00:11:56.879 "method": "nvmf_create_subsystem", 00:11:56.879 "req_id": 1 00:11:56.879 } 00:11:56.879 Got JSON-RPC error response 00:11:56.879 response: 00:11:56.879 { 00:11:56.879 "code": -32602, 00:11:56.879 "message": "Invalid SN EXDuo,?TC+fEbDN.Dr NJ" 00:11:56.879 }' 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:56.879 { 00:11:56.879 "nqn": "nqn.2016-06.io.spdk:cnode1458", 00:11:56.879 "serial_number": "EXDuo,?TC+fEbDN.Dr NJ", 00:11:56.879 "method": "nvmf_create_subsystem", 00:11:56.879 "req_id": 1 00:11:56.879 } 00:11:56.879 Got JSON-RPC error response 00:11:56.879 response: 00:11:56.879 { 00:11:56.879 "code": -32602, 00:11:56.879 "message": "Invalid SN EXDuo,?TC+fEbDN.Dr NJ" 00:11:56.879 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:56.879 
00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:11:56.879 17:54:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[... 41 printf %x / echo -e / string+= iterations trimmed; character by character they assemble '02P?y:x;~)j%.m{(WF&1?!-Ngwf!ovM_<lCc8{3C^' ...]
00:11:56.881 17:54:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 0 == \- ]]
00:11:56.881 17:54:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '02P?y:x;~)j%.m{(WF&1?!-Ngwf!ovM_<lCc8{3C^'
[... intervening output missing from the captured log ...]
00:11:59.718 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:01.633
00:12:01.633 real 0m9.242s
00:12:01.633 user 0m22.460s
00:12:01.633 sys 0m2.446s
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:01.633 ************************************
00:12:01.633 END TEST nvmf_invalid
00:12:01.633 ************************************
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:01.633 ************************************
00:12:01.633 START TEST nvmf_connect_stress
00:12:01.633 ************************************
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:01.633 * Looking for test storage...
00:12:01.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three tool directories repeated by earlier sourcing ...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... previous PATH ...]
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... previous PATH ...]
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... rest of PATH ...]
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:01.633 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.634 17:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.165 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.166 17:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:04.166 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:04.166 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
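The discovery pass above keys purely off PCI vendor/device IDs: nvmf/common.sh collects candidate IDs for Intel E810 (0x1592, 0x159b) and X722 (0x37d2) parts plus several Mellanox ones, then resolves each matching PCI function to its kernel network interfaces through sysfs. A trimmed-down sketch of that sysfs walk; the ID list here is reduced to the two E810 variants this job matched, and the output line merely mimics the log's "Found net devices under ..." message:

    #!/usr/bin/env bash
    # Map Intel E810 PCI functions to their net interfaces via sysfs,
    # in the spirit of gather_supported_nvmf_pci_devs in nvmf/common.sh.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == 0x8086 && ($device == 0x1592 || $device == 0x159b) ]] || continue
        pci=${dev##*/}
        for net in "$dev"/net/*; do
            # The glob stays literal when the device has no net children.
            [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
        done
    done

On this node the walk lands on the two ice-driven ports 0000:09:00.0 and 0000:09:00.1, which carry the interface names cvl_0_0 and cvl_0_1 here.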
00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:04.166 Found net devices under 0000:09:00.0: cvl_0_0 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:04.166 Found net devices under 0000:09:00.1: cvl_0_1 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:12:04.166 00:12:04.166 --- 10.0.0.2 ping statistics --- 00:12:04.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.166 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:12:04.166 00:12:04.166 --- 10.0.0.1 ping statistics --- 00:12:04.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.166 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.166 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.167 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.167 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.167 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.167 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.167 17:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2750923 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2750923 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2750923 ']' 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 [2024-07-24 17:54:50.071201] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
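With both ports identified, nvmf_tcp_init builds a point-to-point rig out of them: the target port cvl_0_0 is moved into a private network namespace and given 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in the firewall, and a ping in each direction proves the link before any NVMe/TCP traffic flows. Condensed from the trace above into a reusable sketch (run as root; assumes the two cvl interfaces exist and nothing else owns 10.0.0.0/24):

    #!/usr/bin/env bash
    # Recreate the target/initiator split used by nvmf_tcp_init.
    set -e
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Keeping the target in its own namespace is what makes the sub-millisecond pings above meaningful: the traffic crosses the physical E810 ports instead of being short-circuited through the local stack.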
00:12:04.167 [2024-07-24 17:54:50.071286] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.167 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.167 [2024-07-24 17:54:50.134839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.167 [2024-07-24 17:54:50.252946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.167 [2024-07-24 17:54:50.253004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.167 [2024-07-24 17:54:50.253032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.167 [2024-07-24 17:54:50.253047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.167 [2024-07-24 17:54:50.253059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.167 [2024-07-24 17:54:50.253181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.167 [2024-07-24 17:54:50.253266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.167 [2024-07-24 17:54:50.253269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 [2024-07-24 17:54:50.397182] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420
00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:04.167 [2024-07-24 17:54:50.427217] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:04.167 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:04.425 NULL1
00:12:04.425 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:04.425 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2750957
00:12:04.425 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:04.425 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:04.425 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:12:04.425 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
[... twenty 'for i in $(seq 1 20)' / 'cat' xtrace pairs trimmed ...]
00:12:04.425 EAL: No free 2048 kB hugepages reported on node 1
[... loop xtrace continues ...]
00:12:04.426 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957
00:12:04.426 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:04.426 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:04.426 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
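Before the stressor starts hammering connections, connect_stress.sh provisions the target over JSON-RPC: a TCP transport (with the -o -u 8192 options nvmftestinit chose), subsystem cnode1 open to any host with serial SPDK00000000000001 and at most 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks as backing storage. rpc_cmd is autotest's thin wrapper around scripts/rpc.py, so the same sequence can be issued directly; a plain sketch with paths as in this job, not the script itself:

    #!/usr/bin/env bash
    # The provisioning sequence from the trace, issued straight through rpc.py.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks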
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.684 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.684 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:04.684 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.684 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.684 17:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.941 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.941 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:04.941 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.941 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.941 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.199 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.199 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:05.199 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.199 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.199 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.765 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.765 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:05.765 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.765 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.765 17:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.022 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.022 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:06.022 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.022 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.022 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.281 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.281 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:06.281 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.281 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.281 17:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.538 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.538 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:06.538 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.538 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.538 17:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.795 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.795 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:06.795 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.795 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.795 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.357 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.357 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:07.357 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.357 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.357 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.612 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.612 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:07.612 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.612 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.612 17:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.869 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.869 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:07.869 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.869 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.869 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.126 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.126 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:08.127 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.127 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.127 17:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.693 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.693 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:08.693 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.693 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.693 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.950 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.950 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:08.950 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.950 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.950 17:54:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.208 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.209 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:09.209 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.209 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.209 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.465 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.465 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:09.465 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.465 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.465 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.721 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.721 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:09.721 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.721 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.721 17:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.287 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.287 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:10.287 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.287 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.287 17:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.544 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.544 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:10.544 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.544 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.544 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.800 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.800 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:10.800 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.800 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.800 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.057 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.057 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:11.057 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.057 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.057 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.316 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.316 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:11.316 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.316 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.316 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.880 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.881 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:11.881 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.881 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.881 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.137 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.137 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:12.137 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.137 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.137 17:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.395 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.395 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:12.395 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.395 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.395 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.652 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.652 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:12.652 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.652 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.652 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.909 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.909 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:12.909 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.909 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.909 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.473 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.473 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:13.473 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.473 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.473 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.730 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.730 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:13.730 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.730 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.730 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.987 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.987 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:13.987 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.987 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.987 17:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.244 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.244 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:14.244 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.244 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.244 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.499 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2750957 00:12:14.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2750957) - No such process 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2750957 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.757 rmmod nvme_tcp 00:12:14.757 rmmod nvme_fabrics 00:12:14.757 rmmod nvme_keyring 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2750923 ']' 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2750923 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2750923 ']' 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2750923 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.757 17:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750923 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750923' 00:12:14.757 killing process with pid 2750923 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2750923 00:12:14.757 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2750923 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.016 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.922 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:16.922 00:12:16.922 real 0m15.369s 00:12:16.922 user 0m38.264s 00:12:16.922 sys 0m6.015s 00:12:16.922 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.922 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.922 ************************************ 00:12:16.922 END TEST nvmf_connect_stress 00:12:16.922 ************************************ 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.180 ************************************ 00:12:17.180 START TEST nvmf_fused_ordering 00:12:17.180 ************************************ 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:17.180 * Looking for test storage... 
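
The connect_stress run that just ended follows a simple shape, visible in the target/connect_stress.sh line numbers threaded through the xtrace above: build an RPC batch by cat-ing twenty entries into rpc.txt, launch the connect_stress initiator with a 10-second budget (-t 10), then poll the PID and replay the RPC batch until kill -0 fails with "No such process" (PID 2750957 vanished right on schedule, 17:54:50 to 17:55:00). A minimal sketch of that poll-while-alive pattern; the actual contents of the twenty cat'ed entries are never printed in this log, so the payload stays abstract:

    # Sketch of the pattern in the trace above, not the verbatim script.
    # rpc_cmd and the rpc.txt payload come from the harness; only the
    # connect_stress flags and the wait are copied from the log.
    rpcs=/tmp/rpc.txt
    ./connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do   # kill -0 only tests that the PID still exists
        rpc_cmd < "$rpcs"                       # replay the batched RPCs against the target
    done
    wait "$PERF_PID"

The fused_ordering suite that starts next reuses the same target bring-up, so its setup trace below reads almost identically to the one before this test.
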
00:12:17.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.180 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.181 17:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.710 17:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:19.710 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.710 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:19.711 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:19.711 Found net devices under 0000:09:00.0: cvl_0_0 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:19.711 Found net devices under 0000:09:00.1: cvl_0_1 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:12:19.711 00:12:19.711 --- 10.0.0.2 ping statistics --- 00:12:19.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.711 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:12:19.711 00:12:19.711 --- 10.0.0.1 ping statistics --- 00:12:19.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.711 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2754105 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2754105 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2754105 ']' 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.711 [2024-07-24 17:55:05.584299] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
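
Both pings succeeding closes out nvmf_tcp_init, and the trace above is the whole trick by which one host exercises NVMe/TCP over a real physical link: the target-side e810 port (cvl_0_0) is moved into a private network namespace, so 10.0.0.1 (initiator, default namespace) and 10.0.0.2 (target, cvl_0_0_ns_spdk) can only reach each other through the wire. Condensed from the nvmf/common.sh commands replayed above:

    # Condensed from the nvmf_tcp_init trace above; every command here
    # appears verbatim in the log.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why NVMF_APP gets re-prefixed with the netns-exec command right after the pings: the nvmf_tgt process whose startup banner follows runs inside cvl_0_0_ns_spdk.
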
00:12:19.711 [2024-07-24 17:55:05.584391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.711 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.711 [2024-07-24 17:55:05.647271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.711 [2024-07-24 17:55:05.754204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.711 [2024-07-24 17:55:05.754254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.711 [2024-07-24 17:55:05.754278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.711 [2024-07-24 17:55:05.754289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.711 [2024-07-24 17:55:05.754300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.711 [2024-07-24 17:55:05.754326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.711 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.712 [2024-07-24 17:55:05.910618] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.712 [2024-07-24 17:55:05.926795] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.712 NULL1 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.712 17:55:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:19.712 [2024-07-24 17:55:05.972438] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
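
With the target app up, fused_ordering.sh provisions the subsystem through six RPCs, all visible in the trace: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a 1000 MB / 512-byte-block null bdev attached as namespace 1 (hence the "size: 1GB" line below). rpc_cmd is, by all appearances, a thin wrapper over scripts/rpc.py, so the equivalent standalone sequence would be roughly:

    # Equivalent rpc.py sequence; every flag is copied verbatim from the
    # xtrace above. That rpc_cmd maps 1:1 onto scripts/rpc.py is an
    # assumption of this sketch, not something the log states.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10        # -a: any host may connect
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB, 512-byte blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering initiator whose EAL banner follows then connects to that subsystem and drives the numbered iterations below.
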
00:12:19.712 [2024-07-24 17:55:05.972478] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754239 ] 00:12:19.969 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.227 Attached to nqn.2016-06.io.spdk:cnode1 00:12:20.227 Namespace ID: 1 size: 1GB 00:12:20.227 fused_ordering(0) [... fused_ordering(1) through fused_ordering(1022) elided: 1024 sequential fused_ordering completion counters logged between 00:12:20.227 and 00:12:22.856 ...] 00:12:22.856 fused_ordering(1023) 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.856 17:55:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.856 rmmod nvme_tcp 00:12:22.856 rmmod nvme_fabrics 00:12:22.856 rmmod nvme_keyring 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125
-- # return 0 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2754105 ']' 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2754105 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2754105 ']' 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2754105 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2754105 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2754105' 00:12:22.856 killing process with pid 2754105 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2754105 00:12:22.856 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2754105 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.114 17:55:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.643 00:12:25.643 real 0m8.134s 00:12:25.643 user 0m5.646s 00:12:25.643 sys 0m3.723s 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.643 ************************************ 00:12:25.643 END TEST nvmf_fused_ordering 00:12:25.643 ************************************ 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.643 17:55:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.643 ************************************ 00:12:25.643 START TEST nvmf_ns_masking 00:12:25.643 ************************************ 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:25.643 * Looking for test storage... 00:12:25.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.643 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... six further repetitions of the same golangci/protoc/go triple elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicated tool paths elided ...]:/var/lib/snapd/snap/bin 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicated tool paths elided ...]:/var/lib/snapd/snap/bin 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicated tool paths elided ...]:/var/lib/snapd/snap/bin 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.644 17:55:11
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=13cd02de-661d-4c55-8d5f-c267d06b2d7a 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=25134a84-46a4-401d-ae3d-a7e7907cf315 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5a6be6d3-3d6d-44e7-b724-20f79b6fcbd6 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.644 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:27.543 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:27.543 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:27.543 Found net devices under 0000:09:00.0: cvl_0_0 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:27.543 Found net devices under 0000:09:00.1: cvl_0_1 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.543 17:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:12:27.543 00:12:27.543 --- 10.0.0.2 ping statistics --- 00:12:27.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.543 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:12:27.543 00:12:27.543 --- 10.0.0.1 ping statistics --- 00:12:27.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.543 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.543 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2756456 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2756456 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2756456 ']' 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.544 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.544 [2024-07-24 17:55:13.657828] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:12:27.544 [2024-07-24 17:55:13.657911] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.544 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.544 [2024-07-24 17:55:13.725640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.800 [2024-07-24 17:55:13.851761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.800 [2024-07-24 17:55:13.851821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.800 [2024-07-24 17:55:13.851838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.800 [2024-07-24 17:55:13.851852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.800 [2024-07-24 17:55:13.851864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.800 [2024-07-24 17:55:13.851894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.800 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.800 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:27.800 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.800 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:27.800 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.800 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.800 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:28.057 [2024-07-24 17:55:14.226307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.057 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:28.057 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:28.057 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:28.314 Malloc1 00:12:28.314 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:28.878 Malloc2 00:12:28.878 17:55:14 
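[annotator's note] With the target listening on its RPC socket, everything else is provisioned over JSON-RPC: a TCP transport, then two 64 MiB RAM-backed bdevs that will back the masked and unmasked namespaces. The equivalent standalone calls, flags copied from the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as used by this test
    rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
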
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.135 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:29.393 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.651 [2024-07-24 17:55:15.699870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.651 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:29.651 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5a6be6d3-3d6d-44e7-b724-20f79b6fcbd6 -a 10.0.0.2 -s 4420 -i 4 00:12:29.651 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.651 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:12:29.651 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.651 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:29.651 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:32.178 [ 0]:0x1 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0a0097a09204e4aaf840551709cbeaa 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0a0097a09204e4aaf840551709cbeaa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.178 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:32.178 [ 0]:0x1 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0a0097a09204e4aaf840551709cbeaa 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0a0097a09204e4aaf840551709cbeaa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:32.178 [ 1]:0x2 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=670a5025457049fda912437e2463a6e0 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 670a5025457049fda912437e2463a6e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:32.178 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.436 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.694 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:32.952 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:32.952 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
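[annotator's note] At this point the subsystem has been built up, probed from the kernel initiator, and torn back down so that Malloc1 can be re-added with --no-auto-visible for the masking checks that follow. The provisioning and connect sequence, reconstructed from the trace (the host NQN passed via -q is the identity the target's masking rules key on):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 5a6be6d3-3d6d-44e7-b724-20f79b6fcbd6 -a 10.0.0.2 -s 4420 -i 4
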
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5a6be6d3-3d6d-44e7-b724-20f79b6fcbd6 -a 10.0.0.2 -s 4420 -i 4 00:12:33.210 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:33.210 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:12:33.210 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.210 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:12:33.210 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:12:33.210 17:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.110 [ 0]:0x2 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.110 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.369 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=670a5025457049fda912437e2463a6e0 00:12:35.369 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 670a5025457049fda912437e2463a6e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.369 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.628 [ 0]:0x1 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0a0097a09204e4aaf840551709cbeaa 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0a0097a09204e4aaf840551709cbeaa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.628 [ 1]:0x2 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=670a5025457049fda912437e2463a6e0 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 670a5025457049fda912437e2463a6e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.628 17:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.886 [ 0]:0x2 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.886 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.887 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=670a5025457049fda912437e2463a6e0 00:12:35.887 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 670a5025457049fda912437e2463a6e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.887 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:35.887 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.144 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.402 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:36.402 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5a6be6d3-3d6d-44e7-b724-20f79b6fcbd6 -a 10.0.0.2 -s 4420 -i 4 00:12:36.659 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:36.659 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:12:36.659 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.659 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:12:36.659 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:12:36.659 17:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:38.603 [ 0]:0x1 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a0a0097a09204e4aaf840551709cbeaa 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a0a0097a09204e4aaf840551709cbeaa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:38.603 [ 1]:0x2 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=670a5025457049fda912437e2463a6e0 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 670a5025457049fda912437e2463a6e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.603 17:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.861 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:38.861 17:55:25 
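[annotator's note] The visibility checks above all go through two small helpers. ns_is_visible (target/ns_masking.sh) greps `nvme list-ns` for the NSID and then requires a non-zero NGUID from `nvme id-ns`, since a namespace hidden from this host reads back with an all-zero NGUID; NOT (common/autotest_common.sh) inverts a command's exit status so the test can assert failure. Reconstructed in simplified form from the trace:

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"                      # prints e.g. "[ 0]:0x1" when listed
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]       # all-zero NGUID => masked
    }
    NOT() {   # simplified; the real helper also screens out signal deaths (the "(( es > 128 ))" check above)
        if "$@"; then return 1; fi
        return 0
    }
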
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:38.862 [ 0]:0x2 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:38.862 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.119 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=670a5025457049fda912437e2463a6e0 00:12:39.119 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 670a5025457049fda912437e2463a6e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.119 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.119 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.119 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:39.120 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.378 [2024-07-24 17:55:25.429455] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:39.378 request: 00:12:39.378 { 00:12:39.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.378 "nsid": 2, 00:12:39.378 "host": "nqn.2016-06.io.spdk:host1", 00:12:39.378 "method": "nvmf_ns_remove_host", 00:12:39.378 "req_id": 1 00:12:39.378 } 00:12:39.378 Got JSON-RPC error response 00:12:39.378 response: 00:12:39.378 { 00:12:39.378 "code": -32602, 00:12:39.378 "message": "Invalid parameters" 00:12:39.378 } 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
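[annotator's note] The -32602 "Invalid parameters" response above is the failure the test wants: namespace 2 (Malloc2) was added without --no-auto-visible, so it has no per-host visibility list for nvmf_ns_remove_host to edit. Masking applies only to namespaces created invisible; the RPC triple this whole test revolves around is:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # unmask for host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again
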
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.378 [ 0]:0x2 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=670a5025457049fda912437e2463a6e0 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 670a5025457049fda912437e2463a6e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:39.378 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2758078 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2758078 /var/tmp/host.sock 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2758078 ']' 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:39.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.636 17:55:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.636 [2024-07-24 17:55:25.773326] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
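[annotator's note] For the last scenario the test starts a second SPDK application to play the initiator, so two different host NQNs can be exercised through the userspace bdev_nvme driver instead of the kernel one. In outline, with the socket path and core mask from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # The script's "hostrpc" wrapper is just rpc.py aimed at this second socket,
    # i.e. rpc.py -s /var/tmp/host.sock <method> [args].
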
00:12:39.636 [2024-07-24 17:55:25.773426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758078 ] 00:12:39.636 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.636 [2024-07-24 17:55:25.836363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.894 [2024-07-24 17:55:25.958711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.827 17:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.827 17:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:40.827 17:55:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.827 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:41.085 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 13cd02de-661d-4c55-8d5f-c267d06b2d7a 00:12:41.085 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:41.085 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 13CD02DE661D4C558D5FC267D06B2D7A -i 00:12:41.342 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 25134a84-46a4-401d-ae3d-a7e7907cf315 00:12:41.342 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:41.342 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 25134A8446A4401DAE3DA7E7907CF315 -i 00:12:41.599 17:55:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:41.856 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:42.114 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:42.114 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:42.680 nvme0n1 00:12:42.680 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:42.680 17:55:28 
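[annotator's note] Both namespaces are then re-created with explicit NGUIDs (uuid2nguid upcases the UUID and strips its dashes, hence the `tr -d -` above; the upcasing is inferred from the resulting value), each host NQN is granted exactly one of them, and one bdev_nvme controller is attached per host identity. A sketch with values from this run:

    uuid=13cd02de-661d-4c55-8d5f-c267d06b2d7a
    nguid=$(tr -d - <<< "${uuid^^}")                 # 13CD02DE661D4C558D5FC267D06B2D7A
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid"
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    # host1 ends up seeing only nvme0n1 and host2 only nvme1n2, which is exactly
    # what the bdev_get_bdevs name and uuid comparisons below verify.
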
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:42.937 nvme1n2 00:12:42.937 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:42.937 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:42.937 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:42.937 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:42.937 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:43.194 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:43.194 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:43.194 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:43.194 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:43.453 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 13cd02de-661d-4c55-8d5f-c267d06b2d7a == \1\3\c\d\0\2\d\e\-\6\6\1\d\-\4\c\5\5\-\8\d\5\f\-\c\2\6\7\d\0\6\b\2\d\7\a ]] 00:12:43.453 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:43.453 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:43.453 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:43.710 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 25134a84-46a4-401d-ae3d-a7e7907cf315 == \2\5\1\3\4\a\8\4\-\4\6\a\4\-\4\0\1\d\-\a\e\3\d\-\a\7\e\7\9\0\7\c\f\3\1\5 ]] 00:12:43.710 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2758078 00:12:43.710 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2758078 ']' 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2758078 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2758078 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 2758078' 00:12:43.711 killing process with pid 2758078 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2758078 00:12:43.711 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2758078 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.276 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.276 rmmod nvme_tcp 00:12:44.534 rmmod nvme_fabrics 00:12:44.534 rmmod nvme_keyring 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2756456 ']' 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2756456 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2756456 ']' 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2756456 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756456 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756456' 00:12:44.534 killing process with pid 2756456 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2756456 00:12:44.534 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2756456 00:12:44.793 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.793 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.793 
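[annotator's note] Teardown mirrors the setup: the host-side app is killed first (pid 2758078 above), the subsystem deleted, and nvmftestfini then unloads the kernel modules, kills the target, and cleans up the namespace, finishing with the address flush below. Roughly, in the order the trace shows:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # the rmmod lines above
    kill "$nvmfpid"                                          # killprocess 2756456
    # _remove_spdk_ns tears down cvl_0_0_ns_spdk (its output is redirected away),
    # after which: ip -4 addr flush cvl_0_1
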
17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.793 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.793 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.793 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.793 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.793 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:47.325 00:12:47.325 real 0m21.588s 00:12:47.325 user 0m28.754s 00:12:47.325 sys 0m4.112s 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.325 ************************************ 00:12:47.325 END TEST nvmf_ns_masking 00:12:47.325 ************************************ 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.325 ************************************ 00:12:47.325 START TEST nvmf_nvme_cli 00:12:47.325 ************************************ 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:47.325 * Looking for test storage... 
00:12:47.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.325 17:55:33 
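[annotator's note] Sourcing test/nvmf/common.sh gives nvme_cli its defaults: the 4420-4422 ports, the serial number, and a fresh host identity minted with `nvme gen-hostnqn`. As reflected in the trace (the NVME_HOSTID derivation is assumed; the trace only shows the resulting value):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # 29f67375-a902-e411-ace9-001e67bc3c9a in this run
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
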
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.325 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.228 17:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:49.228 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:49.228 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:49.228 17:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.228 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:49.229 Found net devices under 0000:09:00.0: cvl_0_0 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:49.229 Found net devices under 0000:09:00.1: cvl_0_1 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.229 17:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:12:49.229 00:12:49.229 --- 10.0.0.2 ping statistics --- 00:12:49.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.229 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:12:49.229 00:12:49.229 --- 10.0.0.1 ping statistics --- 00:12:49.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.229 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2760575 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2760575 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2760575 ']' 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.229 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.229 [2024-07-24 17:55:35.291791] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
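(For reference: the nvmftestinit bring-up traced above reduces to the shell steps below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values this run picked, not fixed constants.)

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
    modprobe nvme-tcp                                                   # kernel NVMe/TCP initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF        # SPDK build/bin/nvmf_tgt runs inside the namespace
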
00:12:49.229 [2024-07-24 17:55:35.291899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.229 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.229 [2024-07-24 17:55:35.361532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.229 [2024-07-24 17:55:35.486233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.229 [2024-07-24 17:55:35.486296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.229 [2024-07-24 17:55:35.486312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.229 [2024-07-24 17:55:35.486326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.229 [2024-07-24 17:55:35.486338] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.229 [2024-07-24 17:55:35.486436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.229 [2024-07-24 17:55:35.486499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.229 [2024-07-24 17:55:35.486551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.229 [2024-07-24 17:55:35.486554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 [2024-07-24 17:55:36.255941] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 Malloc0 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:50.163 17:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 Malloc1 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 [2024-07-24 17:55:36.342020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.163 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:50.421 00:12:50.421 Discovery Log Number of Records 2, Generation counter 2 00:12:50.421 =====Discovery Log Entry 0====== 00:12:50.421 trtype: tcp 00:12:50.421 adrfam: ipv4 00:12:50.421 subtype: current discovery subsystem 00:12:50.421 treq: not required 
00:12:50.421 portid: 0 00:12:50.421 trsvcid: 4420 00:12:50.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:50.421 traddr: 10.0.0.2 00:12:50.421 eflags: explicit discovery connections, duplicate discovery information 00:12:50.421 sectype: none 00:12:50.421 =====Discovery Log Entry 1====== 00:12:50.421 trtype: tcp 00:12:50.421 adrfam: ipv4 00:12:50.421 subtype: nvme subsystem 00:12:50.421 treq: not required 00:12:50.421 portid: 0 00:12:50.421 trsvcid: 4420 00:12:50.421 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:50.421 traddr: 10.0.0.2 00:12:50.421 eflags: none 00:12:50.421 sectype: none 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:50.421 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.985 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:50.985 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:12:50.985 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.985 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:12:50.985 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:12:50.985 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:53.509 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:53.510 /dev/nvme0n1 ]] 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.510 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:53.510 rmmod nvme_tcp 00:12:53.510 rmmod nvme_fabrics 00:12:53.510 rmmod nvme_keyring 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2760575 ']' 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2760575 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2760575 ']' 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2760575 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2760575 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2760575' 00:12:53.510 killing process with pid 2760575 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2760575 00:12:53.510 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2760575 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.768 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.670 00:12:55.670 real 0m8.780s 00:12:55.670 user 0m17.575s 00:12:55.670 sys 0m2.239s 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:55.670 ************************************ 00:12:55.670 END TEST nvmf_nvme_cli 00:12:55.670 ************************************ 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.670 ************************************ 00:12:55.670 START TEST nvmf_vfio_user 00:12:55.670 ************************************ 00:12:55.670 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:55.670 * Looking for test storage... 
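(Condensed, the nvme_cli test that just passed exercises the host-side sequence below; $NVME_HOSTNQN and $NVME_HOSTID are whatever `nvme gen-hostnqn` produced on this host.)

    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 2: the Malloc0 and Malloc1 namespaces
    nvme list                                                 # /dev/nvme0n1, /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # "disconnected 1 controller(s)"
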
00:12:55.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:55.929 17:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2761507 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2761507' 00:12:55.929 Process pid: 2761507 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2761507 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2761507 ']' 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.929 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:55.929 [2024-07-24 17:55:42.007149] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:12:55.929 [2024-07-24 17:55:42.007246] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.929 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.929 [2024-07-24 17:55:42.073839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.929 [2024-07-24 17:55:42.196892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.929 [2024-07-24 17:55:42.196949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:55.929 [2024-07-24 17:55:42.196965] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.929 [2024-07-24 17:55:42.196979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.929 [2024-07-24 17:55:42.196991] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.929 [2024-07-24 17:55:42.197055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.929 [2024-07-24 17:55:42.197085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.929 [2024-07-24 17:55:42.197125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.929 [2024-07-24 17:55:42.197129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.187 17:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.187 17:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:56.187 17:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:57.119 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:57.375 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:57.375 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:57.375 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:57.375 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:57.375 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:57.939 Malloc1 00:12:57.939 17:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:58.197 17:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:58.493 17:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:58.493 17:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:58.493 17:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:58.493 17:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:58.751 Malloc2 00:12:58.751 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
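(The setup_nvmf_vfio_user loop traced above issues the RPC sequence below once per device, i = 1..2; rpc.py is scripts/rpc.py in the SPDK tree, talking to the nvmf_tgt started with -m '[0,1,2,3]'.)

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1           # socket directory doubles as the traddr
    rpc.py bdev_malloc_create 64 512 -b Malloc1               # 64 MiB backing bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
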
00:12:59.317 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:59.317 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:59.575 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:59.575 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:59.575 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:59.575 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:59.575 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:59.575 17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:59.835 [2024-07-24 17:55:45.859114] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:12:59.835 [2024-07-24 17:55:45.859168] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762051 ] 00:12:59.835 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.835 [2024-07-24 17:55:45.891284] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:59.835 [2024-07-24 17:55:45.903445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:59.835 [2024-07-24 17:55:45.903474] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f39ef1ce000 00:12:59.835 [2024-07-24 17:55:45.904447] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.905430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.906446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.907457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.908464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.909471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.910493] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.911481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:59.835 [2024-07-24 17:55:45.912488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:59.835 [2024-07-24 17:55:45.912507] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f39ef1c3000 00:12:59.835 [2024-07-24 17:55:45.913852] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:59.835 [2024-07-24 17:55:45.933852] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:59.835 [2024-07-24 17:55:45.933888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:59.835 [2024-07-24 17:55:45.936622] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:59.835 [2024-07-24 17:55:45.936679] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:59.835 [2024-07-24 17:55:45.936769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:59.835 [2024-07-24 17:55:45.936794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:59.835 [2024-07-24 17:55:45.936804] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:59.835 [2024-07-24 17:55:45.937615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:59.835 [2024-07-24 17:55:45.937638] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:59.835 [2024-07-24 17:55:45.937651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:59.835 [2024-07-24 17:55:45.938614] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:59.835 [2024-07-24 17:55:45.938634] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:59.835 [2024-07-24 17:55:45.938647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:59.835 [2024-07-24 17:55:45.939618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:59.835 [2024-07-24 17:55:45.939635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:59.836 [2024-07-24 17:55:45.940623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:59.836 [2024-07-24 17:55:45.940642] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:59.836 [2024-07-24 17:55:45.940651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:59.836 [2024-07-24 17:55:45.940663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:59.836 [2024-07-24 17:55:45.940772] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:59.836 [2024-07-24 17:55:45.940780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:59.836 [2024-07-24 17:55:45.940788] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:59.836 [2024-07-24 17:55:45.941626] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:59.836 [2024-07-24 17:55:45.942632] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:59.836 [2024-07-24 17:55:45.943639] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:59.836 [2024-07-24 17:55:45.944639] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.836 [2024-07-24 17:55:45.944772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:59.836 [2024-07-24 17:55:45.945659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:59.836 [2024-07-24 17:55:45.945677] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:59.836 [2024-07-24 17:55:45.945685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.945709] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:59.836 [2024-07-24 17:55:45.945722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.945744] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:59.836 [2024-07-24 17:55:45.945753] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.836 [2024-07-24 17:55:45.945759] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.836 [2024-07-24 17:55:45.945777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.945855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:59.836 [2024-07-24 17:55:45.945871] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:59.836 [2024-07-24 17:55:45.945879] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:59.836 [2024-07-24 17:55:45.945887] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:59.836 [2024-07-24 17:55:45.945894] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:59.836 [2024-07-24 17:55:45.945906] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:59.836 [2024-07-24 17:55:45.945914] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:59.836 [2024-07-24 17:55:45.945922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.945934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.945953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.945971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:59.836 [2024-07-24 17:55:45.945992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.836 [2024-07-24 17:55:45.946006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.836 [2024-07-24 17:55:45.946018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.836 [2024-07-24 17:55:45.946029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.836 [2024-07-24 17:55:45.946037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.946077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:59.836 [2024-07-24 17:55:45.946087] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:59.836 
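For reference, the vfio-user endpoint that spdk_nvme_identify is attaching to in this trace is assembled with a short rpc.py sequence; the sh@73/sh@74 lines above show the add-namespace and add-listener calls for cnode2. A minimal sketch of the full target-side sequence, assuming a running SPDK target and borrowing the paths and NQNs from this log (the create-transport and create-subsystem steps fall outside this excerpt and are assumptions here):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Register the vfio-user transport once per target process (assumed earlier step).
$rpc nvmf_create_transport -t VFIOUSER
# Create the subsystem and back it with a malloc bdev; names mirror this log (assumed earlier step).
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc bdev_malloc_create 64 512 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
# The listener address is a directory; vfio-user exposes the controller files there.
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0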
[2024-07-24 17:55:45.946127] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.946181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:59.836 [2024-07-24 17:55:45.946248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946277] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:59.836 [2024-07-24 17:55:45.946286] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:59.836 [2024-07-24 17:55:45.946292] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.836 [2024-07-24 17:55:45.946305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.946318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:59.836 [2024-07-24 17:55:45.946334] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:59.836 [2024-07-24 17:55:45.946353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946367] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946379] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:59.836 [2024-07-24 17:55:45.946388] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.836 [2024-07-24 17:55:45.946394] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.836 [2024-07-24 17:55:45.946409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.946451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:59.836 [2024-07-24 17:55:45.946472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946498] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:59.836 [2024-07-24 17:55:45.946506] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.836 [2024-07-24 17:55:45.946512] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.836 [2024-07-24 17:55:45.946521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.946537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:59.836 [2024-07-24 17:55:45.946550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946612] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:59.836 [2024-07-24 17:55:45.946619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:59.836 [2024-07-24 17:55:45.946631] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:59.836 [2024-07-24 17:55:45.946655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:59.836 [2024-07-24 17:55:45.946673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:59.837 [2024-07-24 17:55:45.946692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:59.837 [2024-07-24 17:55:45.946704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:59.837 [2024-07-24 17:55:45.946720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:59.837 [2024-07-24 
17:55:45.946731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:59.837 [2024-07-24 17:55:45.946747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:59.837 [2024-07-24 17:55:45.946758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:59.837 [2024-07-24 17:55:45.946779] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:59.837 [2024-07-24 17:55:45.946789] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:59.837 [2024-07-24 17:55:45.946795] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:59.837 [2024-07-24 17:55:45.946801] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:59.837 [2024-07-24 17:55:45.946807] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:59.837 [2024-07-24 17:55:45.946816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:59.837 [2024-07-24 17:55:45.946828] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:59.837 [2024-07-24 17:55:45.946836] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:59.837 [2024-07-24 17:55:45.946841] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.837 [2024-07-24 17:55:45.946850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:59.837 [2024-07-24 17:55:45.946861] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:59.837 [2024-07-24 17:55:45.946869] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:59.837 [2024-07-24 17:55:45.946874] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.837 [2024-07-24 17:55:45.946883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:59.837 [2024-07-24 17:55:45.946895] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:59.837 [2024-07-24 17:55:45.946903] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:59.837 [2024-07-24 17:55:45.946908] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:59.837 [2024-07-24 17:55:45.946917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:59.837 [2024-07-24 17:55:45.946928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:59.837 [2024-07-24 17:55:45.946947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:59.837 [2024-07-24 
17:55:45.946970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:59.837 [2024-07-24 17:55:45.946982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:59.837 ===================================================== 00:12:59.837 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:59.837 ===================================================== 00:12:59.837 Controller Capabilities/Features 00:12:59.837 ================================ 00:12:59.837 Vendor ID: 4e58 00:12:59.837 Subsystem Vendor ID: 4e58 00:12:59.837 Serial Number: SPDK1 00:12:59.837 Model Number: SPDK bdev Controller 00:12:59.837 Firmware Version: 24.09 00:12:59.837 Recommended Arb Burst: 6 00:12:59.837 IEEE OUI Identifier: 8d 6b 50 00:12:59.837 Multi-path I/O 00:12:59.837 May have multiple subsystem ports: Yes 00:12:59.837 May have multiple controllers: Yes 00:12:59.837 Associated with SR-IOV VF: No 00:12:59.837 Max Data Transfer Size: 131072 00:12:59.837 Max Number of Namespaces: 32 00:12:59.837 Max Number of I/O Queues: 127 00:12:59.837 NVMe Specification Version (VS): 1.3 00:12:59.837 NVMe Specification Version (Identify): 1.3 00:12:59.837 Maximum Queue Entries: 256 00:12:59.837 Contiguous Queues Required: Yes 00:12:59.837 Arbitration Mechanisms Supported 00:12:59.837 Weighted Round Robin: Not Supported 00:12:59.837 Vendor Specific: Not Supported 00:12:59.837 Reset Timeout: 15000 ms 00:12:59.837 Doorbell Stride: 4 bytes 00:12:59.837 NVM Subsystem Reset: Not Supported 00:12:59.837 Command Sets Supported 00:12:59.837 NVM Command Set: Supported 00:12:59.837 Boot Partition: Not Supported 00:12:59.837 Memory Page Size Minimum: 4096 bytes 00:12:59.837 Memory Page Size Maximum: 4096 bytes 00:12:59.837 Persistent Memory Region: Not Supported 00:12:59.837 Optional Asynchronous Events Supported 00:12:59.837 Namespace Attribute Notices: Supported 00:12:59.837 Firmware Activation Notices: Not Supported 00:12:59.837 ANA Change Notices: Not Supported 00:12:59.837 PLE Aggregate Log Change Notices: Not Supported 00:12:59.837 LBA Status Info Alert Notices: Not Supported 00:12:59.837 EGE Aggregate Log Change Notices: Not Supported 00:12:59.837 Normal NVM Subsystem Shutdown event: Not Supported 00:12:59.837 Zone Descriptor Change Notices: Not Supported 00:12:59.837 Discovery Log Change Notices: Not Supported 00:12:59.837 Controller Attributes 00:12:59.837 128-bit Host Identifier: Supported 00:12:59.837 Non-Operational Permissive Mode: Not Supported 00:12:59.837 NVM Sets: Not Supported 00:12:59.837 Read Recovery Levels: Not Supported 00:12:59.837 Endurance Groups: Not Supported 00:12:59.837 Predictable Latency Mode: Not Supported 00:12:59.837 Traffic Based Keep ALive: Not Supported 00:12:59.837 Namespace Granularity: Not Supported 00:12:59.837 SQ Associations: Not Supported 00:12:59.837 UUID List: Not Supported 00:12:59.837 Multi-Domain Subsystem: Not Supported 00:12:59.837 Fixed Capacity Management: Not Supported 00:12:59.837 Variable Capacity Management: Not Supported 00:12:59.837 Delete Endurance Group: Not Supported 00:12:59.837 Delete NVM Set: Not Supported 00:12:59.837 Extended LBA Formats Supported: Not Supported 00:12:59.837 Flexible Data Placement Supported: Not Supported 00:12:59.837 00:12:59.837 Controller Memory Buffer Support 00:12:59.837 ================================ 00:12:59.837 Supported: No 00:12:59.837 00:12:59.837 Persistent 
Memory Region Support 00:12:59.837 ================================ 00:12:59.837 Supported: No 00:12:59.837 00:12:59.837 Admin Command Set Attributes 00:12:59.837 ============================ 00:12:59.837 Security Send/Receive: Not Supported 00:12:59.837 Format NVM: Not Supported 00:12:59.837 Firmware Activate/Download: Not Supported 00:12:59.837 Namespace Management: Not Supported 00:12:59.837 Device Self-Test: Not Supported 00:12:59.837 Directives: Not Supported 00:12:59.837 NVMe-MI: Not Supported 00:12:59.837 Virtualization Management: Not Supported 00:12:59.837 Doorbell Buffer Config: Not Supported 00:12:59.837 Get LBA Status Capability: Not Supported 00:12:59.837 Command & Feature Lockdown Capability: Not Supported 00:12:59.837 Abort Command Limit: 4 00:12:59.837 Async Event Request Limit: 4 00:12:59.837 Number of Firmware Slots: N/A 00:12:59.837 Firmware Slot 1 Read-Only: N/A 00:12:59.837 Firmware Activation Without Reset: N/A 00:12:59.837 Multiple Update Detection Support: N/A 00:12:59.837 Firmware Update Granularity: No Information Provided 00:12:59.837 Per-Namespace SMART Log: No 00:12:59.837 Asymmetric Namespace Access Log Page: Not Supported 00:12:59.837 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:59.837 Command Effects Log Page: Supported 00:12:59.837 Get Log Page Extended Data: Supported 00:12:59.837 Telemetry Log Pages: Not Supported 00:12:59.837 Persistent Event Log Pages: Not Supported 00:12:59.837 Supported Log Pages Log Page: May Support 00:12:59.837 Commands Supported & Effects Log Page: Not Supported 00:12:59.837 Feature Identifiers & Effects Log Page:May Support 00:12:59.837 NVMe-MI Commands & Effects Log Page: May Support 00:12:59.837 Data Area 4 for Telemetry Log: Not Supported 00:12:59.837 Error Log Page Entries Supported: 128 00:12:59.837 Keep Alive: Supported 00:12:59.837 Keep Alive Granularity: 10000 ms 00:12:59.837 00:12:59.837 NVM Command Set Attributes 00:12:59.837 ========================== 00:12:59.837 Submission Queue Entry Size 00:12:59.837 Max: 64 00:12:59.837 Min: 64 00:12:59.837 Completion Queue Entry Size 00:12:59.837 Max: 16 00:12:59.837 Min: 16 00:12:59.837 Number of Namespaces: 32 00:12:59.837 Compare Command: Supported 00:12:59.837 Write Uncorrectable Command: Not Supported 00:12:59.837 Dataset Management Command: Supported 00:12:59.837 Write Zeroes Command: Supported 00:12:59.837 Set Features Save Field: Not Supported 00:12:59.837 Reservations: Not Supported 00:12:59.837 Timestamp: Not Supported 00:12:59.837 Copy: Supported 00:12:59.837 Volatile Write Cache: Present 00:12:59.837 Atomic Write Unit (Normal): 1 00:12:59.837 Atomic Write Unit (PFail): 1 00:12:59.837 Atomic Compare & Write Unit: 1 00:12:59.837 Fused Compare & Write: Supported 00:12:59.837 Scatter-Gather List 00:12:59.838 SGL Command Set: Supported (Dword aligned) 00:12:59.838 SGL Keyed: Not Supported 00:12:59.838 SGL Bit Bucket Descriptor: Not Supported 00:12:59.838 SGL Metadata Pointer: Not Supported 00:12:59.838 Oversized SGL: Not Supported 00:12:59.838 SGL Metadata Address: Not Supported 00:12:59.838 SGL Offset: Not Supported 00:12:59.838 Transport SGL Data Block: Not Supported 00:12:59.838 Replay Protected Memory Block: Not Supported 00:12:59.838 00:12:59.838 Firmware Slot Information 00:12:59.838 ========================= 00:12:59.838 Active slot: 1 00:12:59.838 Slot 1 Firmware Revision: 24.09 00:12:59.838 00:12:59.838 00:12:59.838 Commands Supported and Effects 00:12:59.838 ============================== 00:12:59.838 Admin Commands 00:12:59.838 -------------- 00:12:59.838 Get 
Log Page (02h): Supported 00:12:59.838 Identify (06h): Supported 00:12:59.838 Abort (08h): Supported 00:12:59.838 Set Features (09h): Supported 00:12:59.838 Get Features (0Ah): Supported 00:12:59.838 Asynchronous Event Request (0Ch): Supported 00:12:59.838 Keep Alive (18h): Supported 00:12:59.838 I/O Commands 00:12:59.838 ------------ 00:12:59.838 Flush (00h): Supported LBA-Change 00:12:59.838 Write (01h): Supported LBA-Change 00:12:59.838 Read (02h): Supported 00:12:59.838 Compare (05h): Supported 00:12:59.838 Write Zeroes (08h): Supported LBA-Change 00:12:59.838 Dataset Management (09h): Supported LBA-Change 00:12:59.838 Copy (19h): Supported LBA-Change 00:12:59.838 00:12:59.838 Error Log 00:12:59.838 ========= 00:12:59.838 00:12:59.838 Arbitration 00:12:59.838 =========== 00:12:59.838 Arbitration Burst: 1 00:12:59.838 00:12:59.838 Power Management 00:12:59.838 ================ 00:12:59.838 Number of Power States: 1 00:12:59.838 Current Power State: Power State #0 00:12:59.838 Power State #0: 00:12:59.838 Max Power: 0.00 W 00:12:59.838 Non-Operational State: Operational 00:12:59.838 Entry Latency: Not Reported 00:12:59.838 Exit Latency: Not Reported 00:12:59.838 Relative Read Throughput: 0 00:12:59.838 Relative Read Latency: 0 00:12:59.838 Relative Write Throughput: 0 00:12:59.838 Relative Write Latency: 0 00:12:59.838 Idle Power: Not Reported 00:12:59.838 Active Power: Not Reported 00:12:59.838 Non-Operational Permissive Mode: Not Supported 00:12:59.838 00:12:59.838 Health Information 00:12:59.838 ================== 00:12:59.838 Critical Warnings: 00:12:59.838 Available Spare Space: OK 00:12:59.838 Temperature: OK 00:12:59.838 Device Reliability: OK 00:12:59.838 Read Only: No 00:12:59.838 Volatile Memory Backup: OK 00:12:59.838 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:59.838 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:59.838 Available Spare: 0% 00:12:59.838 Available Spare Threshold: 0% 00:12:59.838 Life Percentage Used: 0% 00:12:59.838 Data Units Read: 0 00:12:59.838 Data Units Written: 0 00:12:59.838 Host Read Commands: 0 00:12:59.838 Host Write Commands: 0 00:12:59.838 Controller Busy Time: 0 minutes 00:12:59.838 Power Cycles: 0 00:12:59.838 Power On Hours: 0 hours 00:12:59.838 Unsafe Shutdowns: 0 00:12:59.838 Unrecoverable Media Errors: 0 00:12:59.838 Lifetime Error Log Entries: 0 00:12:59.838 Warning Temperature Time: 0 minutes 00:12:59.838 Critical Temperature Time: 0 minutes 00:12:59.838 00:12:59.838 Number of Queues 00:12:59.838 ================ 00:12:59.838 Number of I/O Submission Queues: 127 00:12:59.838 Number of I/O Completion Queues: 127 00:12:59.838 00:12:59.838 Active Namespaces 00:12:59.838 ================= 00:12:59.838 Namespace ID:1 00:12:59.838 Error Recovery Timeout: Unlimited 00:12:59.838 Command Set Identifier: NVM (00h) 00:12:59.838 Deallocate: Supported 00:12:59.838 Deallocated/Unwritten Error: Not Supported 00:12:59.838 Deallocated Read Value: Unknown 00:12:59.838 Deallocate in Write Zeroes: Not Supported 00:12:59.838 Deallocated Guard Field: 0xFFFF 00:12:59.838 Flush: Supported 00:12:59.838 Reservation: Supported 00:12:59.838 Namespace Sharing Capabilities: Multiple Controllers 00:12:59.838 Size (in LBAs): 131072 (0GiB) 00:12:59.838 Capacity (in LBAs): 131072 (0GiB) 00:12:59.838 Utilization (in LBAs): 131072 (0GiB) 00:12:59.838 NGUID: 52443612A6CC4555B60C824715D0E386 00:12:59.838 UUID: 52443612-a6cc-4555-b60c-824715d0e386 00:12:59.838 Thin Provisioning: Not Supported 00:12:59.838 Per-NS Atomic Units: Yes 00:12:59.838 Atomic Boundary Size (Normal): 0 00:12:59.838 Atomic Boundary Size (PFail): 0 00:12:59.838 Atomic Boundary Offset: 0 00:12:59.838 Maximum Single Source Range Length: 65535 00:12:59.838 Maximum Copy Length: 65535 00:12:59.838 Maximum Source Range Count: 1 00:12:59.838 NGUID/EUI64 Never Reused: No 00:12:59.838 Namespace Write Protected: No 00:12:59.838 Number of LBA Formats: 1 00:12:59.838 Current LBA Format: LBA Format #00 00:12:59.838 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:59.838 00:12:59.838 
[2024-07-24 17:55:45.947125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:59.838 [2024-07-24 17:55:45.947143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:59.838 [2024-07-24 17:55:45.947196] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:59.838 [2024-07-24 17:55:45.947213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.838 [2024-07-24 17:55:45.947224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.838 [2024-07-24 17:55:45.947234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.838 [2024-07-24 17:55:45.947243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.838 [2024-07-24 17:55:45.950128] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:59.838 [2024-07-24 17:55:45.950150] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:59.838 [2024-07-24 17:55:45.950673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:59.838 [2024-07-24 17:55:45.950747] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:59.838 [2024-07-24 17:55:45.950760] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:59.838 [2024-07-24 17:55:45.951687] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:59.838 [2024-07-24 17:55:45.951710] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:59.838 [2024-07-24 17:55:45.951762] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:59.838 [2024-07-24 17:55:45.953722] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:59.838 
17:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:59.838 EAL: No free 2048 kB hugepages reported
on node 1 00:13:00.096 [2024-07-24 17:55:46.183994] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:05.360 Initializing NVMe Controllers 00:13:05.360 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:05.360 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:05.360 Initialization complete. Launching workers. 00:13:05.360 ======================================================== 00:13:05.360 Latency(us) 00:13:05.360 Device Information : IOPS MiB/s Average min max 00:13:05.360 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33799.08 132.03 3786.42 1178.39 7557.96 00:13:05.360 ======================================================== 00:13:05.360 Total : 33799.08 132.03 3786.42 1178.39 7557.96 00:13:05.360 00:13:05.360 [2024-07-24 17:55:51.206579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:05.360 17:55:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:05.360 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.360 [2024-07-24 17:55:51.446754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:10.622 Initializing NVMe Controllers 00:13:10.622 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:10.622 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:10.622 Initialization complete. Launching workers. 
00:13:10.622 ======================================================== 00:13:10.622 Latency(us) 00:13:10.622 Device Information : IOPS MiB/s Average min max 00:13:10.622 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8006.61 5980.58 15962.98 00:13:10.622 ======================================================== 00:13:10.622 Total : 16000.00 62.50 8006.61 5980.58 15962.98 00:13:10.622 00:13:10.622 [2024-07-24 17:55:56.481057] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:10.622 17:55:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:10.622 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.623 [2024-07-24 17:55:56.702147] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.890 [2024-07-24 17:56:01.787576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.890 Initializing NVMe Controllers 00:13:15.890 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:15.890 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:15.890 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:15.890 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:15.890 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:15.890 Initialization complete. Launching workers. 00:13:15.890 Starting thread on core 2 00:13:15.890 Starting thread on core 3 00:13:15.890 Starting thread on core 1 00:13:15.890 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:15.890 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.890 [2024-07-24 17:56:02.083560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:19.178 [2024-07-24 17:56:05.154138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:19.178 Initializing NVMe Controllers 00:13:19.178 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:19.178 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:19.178 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:19.178 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:19.178 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:19.178 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:19.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:19.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:19.178 Initialization complete. Launching workers. 
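The read/write perf runs, the reconnect pass and the arbitration pass all address the device through the same transport ID string. A rough sketch of the per-device loop shape, reconstructed from the sh@80-sh@89 trace lines in this log (the actual loop body lives in nvmf_vfio_user.sh and is not shown in this excerpt; NUM_DEVICES comes from the script environment):
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in $(seq 1 $NUM_DEVICES); do
    test_traddr=/var/run/vfio-user/domain/vfio-user$i/$i
    test_subnqn=nqn.2019-07.io.spdk:cnode$i
    trid="trtype:VFIOUSER traddr:$test_traddr subnqn:$test_subnqn"
    # One -r transport ID drives every initiator-side tool; flags as traced above.
    $spdk/build/bin/spdk_nvme_identify -r "$trid" -g -L nvme -L nvme_vfio -L vfio_pci
    $spdk/build/bin/spdk_nvme_perf -r "$trid" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    $spdk/build/examples/reconnect -r "$trid" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    $spdk/build/examples/arbitration -t 3 -r "$trid" -d 256 -g
done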
00:13:19.178 Starting thread on core 1 with urgent priority queue 00:13:19.178 Starting thread on core 2 with urgent priority queue 00:13:19.178 Starting thread on core 3 with urgent priority queue 00:13:19.178 Starting thread on core 0 with urgent priority queue 00:13:19.178 SPDK bdev Controller (SPDK1 ) core 0: 1652.67 IO/s 60.51 secs/100000 ios 00:13:19.178 SPDK bdev Controller (SPDK1 ) core 1: 2061.00 IO/s 48.52 secs/100000 ios 00:13:19.178 SPDK bdev Controller (SPDK1 ) core 2: 2161.00 IO/s 46.27 secs/100000 ios 00:13:19.178 SPDK bdev Controller (SPDK1 ) core 3: 1850.67 IO/s 54.03 secs/100000 ios 00:13:19.178 ======================================================== 00:13:19.178 00:13:19.178 17:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:19.178 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.436 [2024-07-24 17:56:05.450663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:19.437 Initializing NVMe Controllers 00:13:19.437 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:19.437 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:19.437 Namespace ID: 1 size: 0GB 00:13:19.437 Initialization complete. 00:13:19.437 INFO: using host memory buffer for IO 00:13:19.437 Hello world! 00:13:19.437 [2024-07-24 17:56:05.485235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:19.437 17:56:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:19.437 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.694 [2024-07-24 17:56:05.783594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:20.628 Initializing NVMe Controllers 00:13:20.628 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:20.628 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:20.628 Initialization complete. Launching workers. 
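Further down, the aer_vfio_user stage (sh@90 and sh@22-sh@44 in the traces that follow) checks namespace-attribute async events with a touch-file handshake: the aer tool arms its callbacks and touches a file, the script waits for that file, then hot-adds a second namespace so the controller raises the notice. A condensed sketch of that handshake, with the polling loop simplified from the waitforfile helper traced in autotest_common.sh:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AER_TOUCH_FILE=/tmp/aer_touch_file
rm -f "$AER_TOUCH_FILE"
# Arm the AER listener in the background; -t makes it touch the file once ready.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -n 2 -g -t "$AER_TOUCH_FILE" &
aerpid=$!
while [ ! -e "$AER_TOUCH_FILE" ]; do sleep 1; done   # simplified waitforfile
# Hot-adding namespace 2 triggers the namespace-attribute AEN the tool expects.
$rpc bdev_malloc_create 64 512 --name Malloc3
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
wait $aerpid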
00:13:20.628 submit (in ns) avg, min, max = 7920.3, 3555.6, 4014902.2 00:13:20.628 complete (in ns) avg, min, max = 26679.0, 2067.8, 4018181.1 00:13:20.628 00:13:20.628 Submit histogram 00:13:20.628 ================ 00:13:20.628 Range in us Cumulative Count 00:13:20.628 3.556 - 3.579: 2.3779% ( 312) 00:13:20.628 3.579 - 3.603: 9.7858% ( 972) 00:13:20.629 3.603 - 3.627: 18.4590% ( 1138) 00:13:20.629 3.627 - 3.650: 27.9933% ( 1251) 00:13:20.629 3.650 - 3.674: 36.7579% ( 1150) 00:13:20.629 3.674 - 3.698: 44.9127% ( 1070) 00:13:20.629 3.698 - 3.721: 51.3909% ( 850) 00:13:20.629 3.721 - 3.745: 56.0704% ( 614) 00:13:20.629 3.745 - 3.769: 59.3857% ( 435) 00:13:20.629 3.769 - 3.793: 62.7696% ( 444) 00:13:20.629 3.793 - 3.816: 65.7648% ( 393) 00:13:20.629 3.816 - 3.840: 69.4917% ( 489) 00:13:20.629 3.840 - 3.864: 73.6072% ( 540) 00:13:20.629 3.864 - 3.887: 77.8371% ( 555) 00:13:20.629 3.887 - 3.911: 82.0364% ( 551) 00:13:20.629 3.911 - 3.935: 84.8259% ( 366) 00:13:20.629 3.935 - 3.959: 86.8989% ( 272) 00:13:20.629 3.959 - 3.982: 88.7661% ( 245) 00:13:20.629 3.982 - 4.006: 90.2446% ( 194) 00:13:20.629 4.006 - 4.030: 91.3040% ( 139) 00:13:20.629 4.030 - 4.053: 92.4167% ( 146) 00:13:20.629 4.053 - 4.077: 93.4532% ( 136) 00:13:20.629 4.077 - 4.101: 94.2611% ( 106) 00:13:20.629 4.101 - 4.124: 94.9394% ( 89) 00:13:20.629 4.124 - 4.148: 95.4424% ( 66) 00:13:20.629 4.148 - 4.172: 95.9226% ( 63) 00:13:20.629 4.172 - 4.196: 96.1741% ( 33) 00:13:20.629 4.196 - 4.219: 96.4027% ( 30) 00:13:20.629 4.219 - 4.243: 96.6009% ( 26) 00:13:20.629 4.243 - 4.267: 96.7381% ( 18) 00:13:20.629 4.267 - 4.290: 96.9286% ( 25) 00:13:20.629 4.290 - 4.314: 97.0277% ( 13) 00:13:20.629 4.314 - 4.338: 97.0658% ( 5) 00:13:20.629 4.338 - 4.361: 97.1344% ( 9) 00:13:20.629 4.361 - 4.385: 97.1877% ( 7) 00:13:20.629 4.385 - 4.409: 97.2334% ( 6) 00:13:20.629 4.409 - 4.433: 97.2487% ( 2) 00:13:20.629 4.433 - 4.456: 97.2563% ( 1) 00:13:20.629 4.456 - 4.480: 97.2639% ( 1) 00:13:20.629 4.480 - 4.504: 97.2792% ( 2) 00:13:20.629 4.527 - 4.551: 97.2944% ( 2) 00:13:20.629 4.551 - 4.575: 97.3020% ( 1) 00:13:20.629 4.599 - 4.622: 97.3249% ( 3) 00:13:20.629 4.646 - 4.670: 97.3325% ( 1) 00:13:20.629 4.670 - 4.693: 97.3478% ( 2) 00:13:20.629 4.693 - 4.717: 97.3782% ( 4) 00:13:20.629 4.717 - 4.741: 97.4011% ( 3) 00:13:20.629 4.741 - 4.764: 97.4468% ( 6) 00:13:20.629 4.764 - 4.788: 97.4621% ( 2) 00:13:20.629 4.788 - 4.812: 97.5154% ( 7) 00:13:20.629 4.812 - 4.836: 97.5840% ( 9) 00:13:20.629 4.836 - 4.859: 97.6298% ( 6) 00:13:20.629 4.859 - 4.883: 97.6755% ( 6) 00:13:20.629 4.883 - 4.907: 97.7212% ( 6) 00:13:20.629 4.907 - 4.930: 97.7669% ( 6) 00:13:20.629 4.930 - 4.954: 97.7898% ( 3) 00:13:20.629 4.954 - 4.978: 97.8127% ( 3) 00:13:20.629 4.978 - 5.001: 97.8432% ( 4) 00:13:20.629 5.001 - 5.025: 97.8584% ( 2) 00:13:20.629 5.025 - 5.049: 97.8965% ( 5) 00:13:20.629 5.049 - 5.073: 97.9194% ( 3) 00:13:20.629 5.073 - 5.096: 97.9346% ( 2) 00:13:20.629 5.096 - 5.120: 97.9575% ( 3) 00:13:20.629 5.144 - 5.167: 97.9727% ( 2) 00:13:20.629 5.167 - 5.191: 97.9956% ( 3) 00:13:20.629 5.191 - 5.215: 98.0108% ( 2) 00:13:20.629 5.239 - 5.262: 98.0184% ( 1) 00:13:20.629 5.286 - 5.310: 98.0261% ( 1) 00:13:20.629 5.310 - 5.333: 98.0337% ( 1) 00:13:20.629 5.404 - 5.428: 98.0413% ( 1) 00:13:20.629 5.428 - 5.452: 98.0489% ( 1) 00:13:20.629 5.618 - 5.641: 98.0566% ( 1) 00:13:20.629 5.760 - 5.784: 98.0718% ( 2) 00:13:20.629 5.784 - 5.807: 98.0870% ( 2) 00:13:20.629 5.855 - 5.879: 98.1023% ( 2) 00:13:20.629 5.902 - 5.926: 98.1175% ( 2) 00:13:20.629 6.116 - 6.163: 98.1251% ( 1) 
00:13:20.629 6.163 - 6.210: 98.1328% ( 1) 00:13:20.629 6.210 - 6.258: 98.1404% ( 1) 00:13:20.629 6.305 - 6.353: 98.1556% ( 2) 00:13:20.629 6.353 - 6.400: 98.1709% ( 2) 00:13:20.629 6.447 - 6.495: 98.1785% ( 1) 00:13:20.629 6.590 - 6.637: 98.1937% ( 2) 00:13:20.629 6.637 - 6.684: 98.2014% ( 1) 00:13:20.629 6.732 - 6.779: 98.2090% ( 1) 00:13:20.629 6.779 - 6.827: 98.2166% ( 1) 00:13:20.629 6.827 - 6.874: 98.2242% ( 1) 00:13:20.629 6.874 - 6.921: 98.2318% ( 1) 00:13:20.629 6.969 - 7.016: 98.2471% ( 2) 00:13:20.629 7.016 - 7.064: 98.2547% ( 1) 00:13:20.629 7.064 - 7.111: 98.2699% ( 2) 00:13:20.629 7.111 - 7.159: 98.2776% ( 1) 00:13:20.629 7.159 - 7.206: 98.2852% ( 1) 00:13:20.629 7.206 - 7.253: 98.2928% ( 1) 00:13:20.629 7.396 - 7.443: 98.3157% ( 3) 00:13:20.629 7.443 - 7.490: 98.3309% ( 2) 00:13:20.629 7.490 - 7.538: 98.3462% ( 2) 00:13:20.629 7.538 - 7.585: 98.3538% ( 1) 00:13:20.629 7.585 - 7.633: 98.3690% ( 2) 00:13:20.629 7.680 - 7.727: 98.3766% ( 1) 00:13:20.629 7.775 - 7.822: 98.3843% ( 1) 00:13:20.629 7.822 - 7.870: 98.3995% ( 2) 00:13:20.629 7.917 - 7.964: 98.4071% ( 1) 00:13:20.629 7.964 - 8.012: 98.4148% ( 1) 00:13:20.629 8.012 - 8.059: 98.4376% ( 3) 00:13:20.629 8.059 - 8.107: 98.4529% ( 2) 00:13:20.629 8.107 - 8.154: 98.4605% ( 1) 00:13:20.629 8.249 - 8.296: 98.4681% ( 1) 00:13:20.629 8.344 - 8.391: 98.4757% ( 1) 00:13:20.629 8.391 - 8.439: 98.5062% ( 4) 00:13:20.629 8.439 - 8.486: 98.5215% ( 2) 00:13:20.629 8.486 - 8.533: 98.5291% ( 1) 00:13:20.629 8.581 - 8.628: 98.5443% ( 2) 00:13:20.629 8.628 - 8.676: 98.5519% ( 1) 00:13:20.629 8.723 - 8.770: 98.5596% ( 1) 00:13:20.629 8.770 - 8.818: 98.5672% ( 1) 00:13:20.629 8.818 - 8.865: 98.5748% ( 1) 00:13:20.629 8.913 - 8.960: 98.5824% ( 1) 00:13:20.629 8.960 - 9.007: 98.5900% ( 1) 00:13:20.629 9.007 - 9.055: 98.5977% ( 1) 00:13:20.629 9.102 - 9.150: 98.6053% ( 1) 00:13:20.629 9.197 - 9.244: 98.6129% ( 1) 00:13:20.629 9.244 - 9.292: 98.6205% ( 1) 00:13:20.629 9.387 - 9.434: 98.6282% ( 1) 00:13:20.629 9.481 - 9.529: 98.6358% ( 1) 00:13:20.629 9.529 - 9.576: 98.6434% ( 1) 00:13:20.629 9.576 - 9.624: 98.6510% ( 1) 00:13:20.629 9.861 - 9.908: 98.6586% ( 1) 00:13:20.629 9.956 - 10.003: 98.6663% ( 1) 00:13:20.629 10.145 - 10.193: 98.6739% ( 1) 00:13:20.629 10.193 - 10.240: 98.6815% ( 1) 00:13:20.629 10.240 - 10.287: 98.6891% ( 1) 00:13:20.629 10.335 - 10.382: 98.7044% ( 2) 00:13:20.629 10.382 - 10.430: 98.7196% ( 2) 00:13:20.629 10.477 - 10.524: 98.7272% ( 1) 00:13:20.629 10.619 - 10.667: 98.7349% ( 1) 00:13:20.629 10.809 - 10.856: 98.7425% ( 1) 00:13:20.629 10.999 - 11.046: 98.7501% ( 1) 00:13:20.629 11.236 - 11.283: 98.7577% ( 1) 00:13:20.629 11.378 - 11.425: 98.7653% ( 1) 00:13:20.629 11.520 - 11.567: 98.7806% ( 2) 00:13:20.629 11.615 - 11.662: 98.7882% ( 1) 00:13:20.629 11.804 - 11.852: 98.7958% ( 1) 00:13:20.629 12.136 - 12.231: 98.8034% ( 1) 00:13:20.629 12.231 - 12.326: 98.8263% ( 3) 00:13:20.629 12.516 - 12.610: 98.8339% ( 1) 00:13:20.629 12.610 - 12.705: 98.8416% ( 1) 00:13:20.629 12.800 - 12.895: 98.8492% ( 1) 00:13:20.629 12.990 - 13.084: 98.8568% ( 1) 00:13:20.629 13.084 - 13.179: 98.8644% ( 1) 00:13:20.629 13.179 - 13.274: 98.8720% ( 1) 00:13:20.629 13.274 - 13.369: 98.8797% ( 1) 00:13:20.629 13.464 - 13.559: 98.8873% ( 1) 00:13:20.629 13.653 - 13.748: 98.8949% ( 1) 00:13:20.629 14.033 - 14.127: 98.9101% ( 2) 00:13:20.629 14.222 - 14.317: 98.9254% ( 2) 00:13:20.629 15.455 - 15.550: 98.9330% ( 1) 00:13:20.629 16.972 - 17.067: 98.9406% ( 1) 00:13:20.629 17.067 - 17.161: 98.9483% ( 1) 00:13:20.629 17.351 - 17.446: 98.9864% ( 5) 
00:13:20.629 17.446 - 17.541: 98.9940% ( 1) 00:13:20.629 17.541 - 17.636: 99.0168% ( 3) 00:13:20.629 17.636 - 17.730: 99.0854% ( 9) 00:13:20.629 17.730 - 17.825: 99.1616% ( 10) 00:13:20.629 17.825 - 17.920: 99.1998% ( 5) 00:13:20.629 17.920 - 18.015: 99.2226% ( 3) 00:13:20.629 18.015 - 18.110: 99.2988% ( 10) 00:13:20.629 18.110 - 18.204: 99.3446% ( 6) 00:13:20.629 18.204 - 18.299: 99.3750% ( 4) 00:13:20.629 18.299 - 18.394: 99.4360% ( 8) 00:13:20.629 18.394 - 18.489: 99.5046% ( 9) 00:13:20.630 18.489 - 18.584: 99.6037% ( 13) 00:13:20.630 18.584 - 18.679: 99.6266% ( 3) 00:13:20.630 18.679 - 18.773: 99.6799% ( 7) 00:13:20.630 18.773 - 18.868: 99.7104% ( 4) 00:13:20.630 18.868 - 18.963: 99.7333% ( 3) 00:13:20.630 18.963 - 19.058: 99.7409% ( 1) 00:13:20.630 19.058 - 19.153: 99.7485% ( 1) 00:13:20.630 19.153 - 19.247: 99.7561% ( 1) 00:13:20.630 19.247 - 19.342: 99.7637% ( 1) 00:13:20.630 19.342 - 19.437: 99.7714% ( 1) 00:13:20.630 19.437 - 19.532: 99.7866% ( 2) 00:13:20.630 19.532 - 19.627: 99.7942% ( 1) 00:13:20.630 19.627 - 19.721: 99.8018% ( 1) 00:13:20.630 19.816 - 19.911: 99.8171% ( 2) 00:13:20.630 20.006 - 20.101: 99.8247% ( 1) 00:13:20.630 21.049 - 21.144: 99.8323% ( 1) 00:13:20.630 21.144 - 21.239: 99.8400% ( 1) 00:13:20.630 21.902 - 21.997: 99.8476% ( 1) 00:13:20.630 22.092 - 22.187: 99.8552% ( 1) 00:13:20.630 23.135 - 23.230: 99.8628% ( 1) 00:13:20.630 24.652 - 24.841: 99.8704% ( 1) 00:13:20.630 26.548 - 26.738: 99.8781% ( 1) 00:13:20.630 27.117 - 27.307: 99.8857% ( 1) 00:13:20.630 27.496 - 27.686: 99.8933% ( 1) 00:13:20.630 27.876 - 28.065: 99.9009% ( 1) 00:13:20.630 3980.705 - 4004.978: 99.9924% ( 12) 00:13:20.630 4004.978 - 4029.250: 100.0000% ( 1) 00:13:20.630 00:13:20.630 Complete histogram 00:13:20.630 ================== 00:13:20.630 Range in us Cumulative Count 00:13:20.630 2.062 - 2.074: 1.0441% ( 137) 00:13:20.630 2.074 - 2.086: 31.1714% ( 3953) 00:13:20.630 2.086 - 2.098: 45.0423% ( 1820) 00:13:20.630 2.098 - 2.110: 48.1366% ( 406) 00:13:20.630 2.110 - 2.121: 58.8370% ( 1404) 00:13:20.630 2.121 - 2.133: 61.5426% ( 355) 00:13:20.630 2.133 - 2.145: 65.4142% ( 508) 00:13:20.630 2.145 - 2.157: 74.4989% ( 1192) 00:13:20.630 2.157 - 2.169: 76.1680% ( 219) 00:13:20.630 2.169 - 2.181: 78.3020% ( 280) 00:13:20.630 2.181 - 2.193: 81.7468% ( 452) 00:13:20.630 2.193 - 2.204: 82.4785% ( 96) 00:13:20.630 2.204 - 2.216: 83.5226% ( 137) 00:13:20.630 2.216 - 2.228: 87.6915% ( 547) 00:13:20.630 2.228 - 2.240: 90.0998% ( 316) 00:13:20.630 2.240 - 2.252: 91.6317% ( 201) 00:13:20.630 2.252 - 2.264: 93.4380% ( 237) 00:13:20.630 2.264 - 2.276: 93.8191% ( 50) 00:13:20.630 2.276 - 2.287: 94.0401% ( 29) 00:13:20.630 2.287 - 2.299: 94.4059% ( 48) 00:13:20.630 2.299 - 2.311: 95.0156% ( 80) 00:13:20.630 2.311 - 2.323: 95.4958% ( 63) 00:13:20.630 2.323 - 2.335: 95.6482% ( 20) 00:13:20.630 2.335 - 2.347: 95.8006% ( 20) 00:13:20.630 2.347 - 2.359: 96.0598% ( 34) 00:13:20.630 2.359 - 2.370: 96.3646% ( 40) 00:13:20.630 2.370 - 2.382: 96.7685% ( 53) 00:13:20.630 2.382 - 2.394: 97.3097% ( 71) 00:13:20.630 2.394 - 2.406: 97.6450% ( 44) 00:13:20.630 2.406 - 2.418: 97.7746% ( 17) 00:13:20.630 2.418 - 2.430: 97.8965% ( 16) 00:13:20.630 2.430 - 2.441: 98.0261% ( 17) 00:13:20.630 2.441 - 2.453: 98.1404% ( 15) 00:13:20.630 2.453 - 2.465: 98.2166% ( 10) 00:13:20.630 2.465 - 2.477: 98.2928% ( 10) 00:13:20.630 2.477 - 2.489: 98.3614% ( 9) 00:13:20.630 2.489 - 2.501: 98.3995% ( 5) 00:13:20.630 2.501 - 2.513: 98.4452% ( 6) 00:13:20.630 2.513 - 2.524: 98.4681% ( 3) 00:13:20.630 2.536 - 2.548: 98.4833% ( 2) 00:13:20.630 
2.548 - 2.560: 98.4910% ( 1) 00:13:20.630 2.643 - 2.655: 98.4986% ( 1) 00:13:20.630 2.714 - 2.726: 98.5062% ( 1) 00:13:20.630 2.738 - 2.750: 98.5138% ( 1) 00:13:20.630 2.785 - 2.797: 98.5215% ( 1) 00:13:20.630 3.176 - 3.200: 98.5291% ( 1) 00:13:20.630 3.200 - 3.224: 98.5367% ( 1) 00:13:20.630 3.295 - 3.319: 98.5519% ( 2) 00:13:20.630 3.319 - 3.342: 98.5672% ( 2) 00:13:20.630 3.342 - 3.366: 98.5824% ( 2) 00:13:20.630 3.366 - 3.390: 98.6205% ( 5) 00:13:20.630 3.390 - 3.413: 98.6358% ( 2) 00:13:20.630 3.413 - 3.437: 98.6434% ( 1) 00:13:20.630 3.484 - 3.508: 98.6663% ( 3) 00:13:20.630 3.508 - 3.532: 98.6815% ( 2) 00:13:20.630 3.532 - 3.556: 98.6967% ( 2) 00:13:20.630 3.556 - 3.579: 98.7044% ( 1) 00:13:20.630 3.579 - 3.603: 98.7120% ( 1) 00:13:20.630 3.627 - 3.650: 98.7196% ( 1) 00:13:20.630 3.674 - 3.698: 98.7272% ( 1) 00:13:20.630 3.721 - 3.745: 98.7349% ( 1) 00:13:20.630 3.816 - 3.840: 98.7425% ( 1) 00:13:20.630 3.959 - 3.982: 98.7501% ( 1) 00:13:20.630 5.025 - 5.049: 98.7577% ( 1) 00:13:20.630 5.096 - 5.120: 98.7653% ( 1) 00:13:20.630 5.286 - 5.310: 98.7882% ( 3) 00:13:20.630 5.404 - 5.428: 98.7958% ( 1) 00:13:20.630 5.618 - 5.641: 98.8111% ( 2) 00:13:20.630 5.831 - 5.855: 98.8187% ( 1) 00:13:20.630 5.973 - 5.997: 98.8263% ( 1) 00:13:20.630 6.210 - 6.258: 98.8339% ( 1) 00:13:20.630 6.637 - 6.684: 98.8416% ( 1) 00:13:20.630 6.684 - 6.732: 98.8568% ( 2) 00:13:20.630 7.111 - 7.159: 98.8644% ( 1) 00:13:20.630 7.159 - 7.206: 98.8720% ( 1) 00:13:20.630 7.253 - 7.301: 98.8797% ( 1) 00:13:20.630 7.348 - 7.396: 98.8873% ( 1) 00:13:20.630 7.443 - 7.490: 98.8949% ( 1) 00:13:20.630 7.538 - 7.585: 98.9025% ( 1) 00:13:20.630 8.391 - 8.439: 98.9101% ( 1) 00:13:20.630 15.550 - 15.644: 98.9178% ( 1) 00:13:20.630 15.644 - 15.739: 98.9254% ( 1) 00:13:20.630 15.834 - 15.929: 98.9483% ( 3) 00:13:20.630 15.929 - 16.024: 98.9635% ( 2) 00:13:20.630 16.024 - 16.119: 98.9940% ( 4) 00:13:20.630 16.119 - 16.213: 99.0321% ( 5) 00:13:20.630 16.213 - 16.308: 99.0626% ( 4) 00:13:20.630 16.308 - 16.403: 99.1007% ( 5) 00:13:20.630 16.403 - 16.498: 99.1159% ( 2) 00:13:20.630 16.498 - 16.593: 99.1464% ( 4) 00:13:20.630 16.593 - 16.687: 99.1769% ( 4) 00:13:20.630 16.687 - 16.782: 99.2379% ( 8) 00:13:20.630 16.782 - 16.877: 99.2912% ( 7) 00:13:20.630 16.877 - 16.972: 99.3293% ( 5) 00:13:20.630 16.972 - 17.067: 99.3446% ( 2) 00:13:20.630 17.067 - 17.161: 99.3522% ( 1) 00:13:20.630 17.161 - 17.256: 99.3598% ( 1) 00:13:20.630 17.351 - 17.446: 99.3674% ( 1) 00:13:20.630 17.541 - 17.636: 99.3750% ( 1) 00:13:20.630 18.015 - 18.110: 99.3827% ( 1) 00:13:20.630 1013.381 - 1019.449: 99.3903% ( 1) 00:13:20.630 3980.705 - 4004.978: 99.8781% ( 64) 00:13:20.630 4004.978 - 4029.250: 100.0000% ( 16) 00:13:20.630 00:13:20.630 [2024-07-24 17:56:06.806876] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:20.630 17:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:20.630 17:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:20.630 17:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:20.630 17:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:20.630 17:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:20.888 [ 00:13:20.888 { 00:13:20.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:20.888 "subtype": "Discovery", 00:13:20.888 "listen_addresses": [], 00:13:20.888 "allow_any_host": true, 00:13:20.888 "hosts": [] 00:13:20.888 }, 00:13:20.888 { 00:13:20.888 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:20.888 "subtype": "NVMe", 00:13:20.888 "listen_addresses": [ 00:13:20.888 { 00:13:20.888 "trtype": "VFIOUSER", 00:13:20.888 "adrfam": "IPv4", 00:13:20.888 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:20.888 "trsvcid": "0" 00:13:20.888 } 00:13:20.888 ], 00:13:20.888 "allow_any_host": true, 00:13:20.888 "hosts": [], 00:13:20.888 "serial_number": "SPDK1", 00:13:20.888 "model_number": "SPDK bdev Controller", 00:13:20.888 "max_namespaces": 32, 00:13:20.888 "min_cntlid": 1, 00:13:20.888 "max_cntlid": 65519, 00:13:20.888 "namespaces": [ 00:13:20.888 { 00:13:20.888 "nsid": 1, 00:13:20.888 "bdev_name": "Malloc1", 00:13:20.888 "name": "Malloc1", 00:13:20.888 "nguid": "52443612A6CC4555B60C824715D0E386", 00:13:20.888 "uuid": "52443612-a6cc-4555-b60c-824715d0e386" 00:13:20.888 } 00:13:20.888 ] 00:13:20.888 }, 00:13:20.888 { 00:13:20.889 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:20.889 "subtype": "NVMe", 00:13:20.889 "listen_addresses": [ 00:13:20.889 { 00:13:20.889 "trtype": "VFIOUSER", 00:13:20.889 "adrfam": "IPv4", 00:13:20.889 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:20.889 "trsvcid": "0" 00:13:20.889 } 00:13:20.889 ], 00:13:20.889 "allow_any_host": true, 00:13:20.889 "hosts": [], 00:13:20.889 "serial_number": "SPDK2", 00:13:20.889 "model_number": "SPDK bdev Controller", 00:13:20.889 "max_namespaces": 32, 00:13:20.889 "min_cntlid": 1, 00:13:20.889 "max_cntlid": 65519, 00:13:20.889 "namespaces": [ 00:13:20.889 { 00:13:20.889 "nsid": 1, 00:13:20.889 "bdev_name": "Malloc2", 00:13:20.889 "name": "Malloc2", 00:13:20.889 "nguid": "3FBB8EDB7D514C718EA505BDB6C1C475", 00:13:20.889 "uuid": "3fbb8edb-7d51-4c71-8ea5-05bdb6c1c475" 00:13:20.889 } 00:13:20.889 ] 00:13:20.889 } 00:13:20.889 ] 00:13:20.889 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:20.889 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2764532 00:13:20.889 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:20.889 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:20.889 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:13:21.147 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:21.147 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:21.147 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:13:21.147 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:21.147 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:21.147 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.147 [2024-07-24 17:56:07.309540] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:21.405 Malloc3 00:13:21.405 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:21.405 [2024-07-24 17:56:07.674222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:21.663 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:21.663 Asynchronous Event Request test 00:13:21.663 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:21.663 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:21.663 Registering asynchronous event callbacks... 00:13:21.663 Starting namespace attribute notice tests for all controllers... 00:13:21.663 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:21.663 aer_cb - Changed Namespace 00:13:21.663 Cleaning up... 00:13:21.663 [ 00:13:21.663 { 00:13:21.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:21.663 "subtype": "Discovery", 00:13:21.663 "listen_addresses": [], 00:13:21.663 "allow_any_host": true, 00:13:21.663 "hosts": [] 00:13:21.663 }, 00:13:21.663 { 00:13:21.663 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:21.663 "subtype": "NVMe", 00:13:21.663 "listen_addresses": [ 00:13:21.663 { 00:13:21.663 "trtype": "VFIOUSER", 00:13:21.663 "adrfam": "IPv4", 00:13:21.663 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:21.663 "trsvcid": "0" 00:13:21.663 } 00:13:21.663 ], 00:13:21.663 "allow_any_host": true, 00:13:21.663 "hosts": [], 00:13:21.663 "serial_number": "SPDK1", 00:13:21.663 "model_number": "SPDK bdev Controller", 00:13:21.663 "max_namespaces": 32, 00:13:21.663 "min_cntlid": 1, 00:13:21.663 "max_cntlid": 65519, 00:13:21.663 "namespaces": [ 00:13:21.663 { 00:13:21.663 "nsid": 1, 00:13:21.663 "bdev_name": "Malloc1", 00:13:21.663 "name": "Malloc1", 00:13:21.663 "nguid": "52443612A6CC4555B60C824715D0E386", 00:13:21.663 "uuid": "52443612-a6cc-4555-b60c-824715d0e386" 00:13:21.663 }, 00:13:21.663 { 00:13:21.663 "nsid": 2, 00:13:21.663 "bdev_name": "Malloc3", 00:13:21.663 "name": "Malloc3", 00:13:21.663 "nguid": "2EC91CCCBAC44F3DB0848B894B7E7847", 00:13:21.663 "uuid": "2ec91ccc-bac4-4f3d-b084-8b894b7e7847" 00:13:21.663 } 00:13:21.663 ] 00:13:21.663 }, 00:13:21.663 { 00:13:21.663 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:21.663 "subtype": "NVMe", 00:13:21.663 "listen_addresses": [ 00:13:21.663 { 00:13:21.663 "trtype": "VFIOUSER", 00:13:21.663 "adrfam": "IPv4", 00:13:21.663 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:21.663 "trsvcid": "0" 00:13:21.663 } 00:13:21.663 ], 00:13:21.663 "allow_any_host": true, 00:13:21.663 "hosts": [], 00:13:21.663 
"serial_number": "SPDK2", 00:13:21.663 "model_number": "SPDK bdev Controller", 00:13:21.663 "max_namespaces": 32, 00:13:21.663 "min_cntlid": 1, 00:13:21.663 "max_cntlid": 65519, 00:13:21.663 "namespaces": [ 00:13:21.663 { 00:13:21.663 "nsid": 1, 00:13:21.663 "bdev_name": "Malloc2", 00:13:21.663 "name": "Malloc2", 00:13:21.663 "nguid": "3FBB8EDB7D514C718EA505BDB6C1C475", 00:13:21.663 "uuid": "3fbb8edb-7d51-4c71-8ea5-05bdb6c1c475" 00:13:21.663 } 00:13:21.663 ] 00:13:21.663 } 00:13:21.663 ] 00:13:21.663 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2764532 00:13:21.663 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:21.663 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:21.663 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:21.663 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:21.923 [2024-07-24 17:56:07.944663] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:13:21.923 [2024-07-24 17:56:07.944709] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764576 ] 00:13:21.923 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.923 [2024-07-24 17:56:07.977228] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:21.923 [2024-07-24 17:56:07.989226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:21.923 [2024-07-24 17:56:07.989257] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff9108b0000 00:13:21.923 [2024-07-24 17:56:07.990229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.991233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.992243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.993253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.994261] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.995267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.996271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.997272] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:21.923 [2024-07-24 17:56:07.998286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:21.924 [2024-07-24 17:56:07.998308] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff9108a5000 00:13:21.924 [2024-07-24 17:56:07.999447] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:21.924 [2024-07-24 17:56:08.016180] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:21.924 [2024-07-24 17:56:08.016218] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:21.924 [2024-07-24 17:56:08.018309] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:21.924 [2024-07-24 17:56:08.018364] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:21.924 [2024-07-24 17:56:08.018466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:21.924 [2024-07-24 17:56:08.018489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:21.924 [2024-07-24 17:56:08.018498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:21.924 [2024-07-24 17:56:08.019315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:21.924 [2024-07-24 17:56:08.019343] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:21.924 [2024-07-24 17:56:08.019357] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:21.924 [2024-07-24 17:56:08.020319] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:21.924 [2024-07-24 17:56:08.020340] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:21.924 [2024-07-24 17:56:08.020353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:21.924 [2024-07-24 17:56:08.021327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:21.924 [2024-07-24 17:56:08.021348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:21.924 [2024-07-24 17:56:08.022335] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:21.924 [2024-07-24 17:56:08.022354] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:21.924 [2024-07-24 17:56:08.022364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:21.924 [2024-07-24 17:56:08.022375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:21.924 [2024-07-24 17:56:08.022485] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:21.924 [2024-07-24 17:56:08.022494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:21.924 [2024-07-24 17:56:08.022501] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:21.924 [2024-07-24 17:56:08.023350] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:21.924 [2024-07-24 17:56:08.024356] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:21.924 [2024-07-24 17:56:08.025363] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:21.924 [2024-07-24 17:56:08.026354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.924 [2024-07-24 17:56:08.026436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:21.924 [2024-07-24 17:56:08.027372] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:21.924 [2024-07-24 17:56:08.027405] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:21.924 [2024-07-24 17:56:08.027414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.027438] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:21.924 [2024-07-24 17:56:08.027455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.027476] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:21.924 [2024-07-24 17:56:08.027484] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:21.924 [2024-07-24 17:56:08.027491] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:21.924 [2024-07-24 17:56:08.027507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:21.924 [2024-07-24 17:56:08.034116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:21.924 [2024-07-24 17:56:08.034138] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:21.924 [2024-07-24 17:56:08.034147] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:21.924 [2024-07-24 17:56:08.034154] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:21.924 [2024-07-24 17:56:08.034162] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:21.924 [2024-07-24 17:56:08.034170] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:21.924 [2024-07-24 17:56:08.034177] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:21.924 [2024-07-24 17:56:08.034185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.034197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.034217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:21.924 [2024-07-24 17:56:08.042111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:21.924 [2024-07-24 17:56:08.042151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.924 [2024-07-24 17:56:08.042166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.924 [2024-07-24 17:56:08.042178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.924 [2024-07-24 17:56:08.042193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.924 [2024-07-24 17:56:08.042203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.042218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.042233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:21.924 [2024-07-24 17:56:08.050114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:21.924 [2024-07-24 17:56:08.050132] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:21.924 [2024-07-24 17:56:08.050141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.050157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.050168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.050182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:21.924 [2024-07-24 17:56:08.058114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:21.924 [2024-07-24 17:56:08.058190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.058208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.058222] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:21.924 [2024-07-24 17:56:08.058230] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:21.924 [2024-07-24 17:56:08.058236] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:21.924 [2024-07-24 17:56:08.058246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:21.924 [2024-07-24 17:56:08.066129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:21.924 [2024-07-24 17:56:08.066162] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:21.924 [2024-07-24 17:56:08.066178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.066192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:21.924 [2024-07-24 17:56:08.066205] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:21.924 [2024-07-24 17:56:08.066213] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:21.924 [2024-07-24 17:56:08.066219] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:21.924 [2024-07-24 17:56:08.066229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:21.924 [2024-07-24 17:56:08.074111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:21.924 [2024-07-24 17:56:08.074143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.074172] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.074185] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:21.925 [2024-07-24 17:56:08.074193] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:21.925 [2024-07-24 17:56:08.074199] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:21.925 [2024-07-24 17:56:08.074209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.082114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.082135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.082148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.082173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.082187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.082196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.082204] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.082212] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:21.925 [2024-07-24 17:56:08.082220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:21.925 [2024-07-24 17:56:08.082228] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:21.925 [2024-07-24 17:56:08.082251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.090111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.090138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.098114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.098139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.106111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.106136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.114126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.114157] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:21.925 [2024-07-24 17:56:08.114172] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:21.925 [2024-07-24 17:56:08.114179] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:21.925 [2024-07-24 17:56:08.114185] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:21.925 [2024-07-24 17:56:08.114190] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:21.925 [2024-07-24 17:56:08.114200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:21.925 [2024-07-24 17:56:08.114212] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:21.925 [2024-07-24 17:56:08.114221] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:21.925 [2024-07-24 17:56:08.114227] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:21.925 [2024-07-24 17:56:08.114235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.114246] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:21.925 [2024-07-24 17:56:08.114254] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:21.925 [2024-07-24 17:56:08.114260] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:21.925 [2024-07-24 17:56:08.114269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.114281] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:21.925 [2024-07-24 17:56:08.114289] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:21.925 [2024-07-24 17:56:08.114295] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:21.925 [2024-07-24 17:56:08.114303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:21.925 [2024-07-24 17:56:08.122124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.122152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.122170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:21.925 [2024-07-24 17:56:08.122182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:21.925 ===================================================== 00:13:21.925 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:21.925 ===================================================== 00:13:21.925 Controller Capabilities/Features 00:13:21.925 ================================ 00:13:21.925 Vendor ID: 4e58 00:13:21.925 Subsystem Vendor ID: 4e58 00:13:21.925 Serial Number: SPDK2 00:13:21.925 Model Number: SPDK bdev Controller 00:13:21.925 Firmware Version: 24.09 00:13:21.925 Recommended Arb Burst: 6 00:13:21.925 IEEE OUI Identifier: 8d 6b 50 00:13:21.925 Multi-path I/O 00:13:21.925 May have multiple subsystem ports: Yes 00:13:21.925 May have multiple controllers: Yes 00:13:21.925 Associated with SR-IOV VF: No 00:13:21.925 Max Data Transfer Size: 131072 00:13:21.925 Max Number of Namespaces: 32 00:13:21.925 Max Number of I/O Queues: 127 00:13:21.925 NVMe Specification Version (VS): 1.3 00:13:21.925 NVMe Specification Version (Identify): 1.3 00:13:21.925 Maximum Queue Entries: 256 00:13:21.925 Contiguous Queues Required: Yes 00:13:21.925 Arbitration Mechanisms Supported 00:13:21.925 Weighted Round Robin: Not Supported 00:13:21.925 Vendor Specific: Not Supported 00:13:21.925 Reset Timeout: 15000 ms 00:13:21.925 Doorbell Stride: 4 bytes 00:13:21.925 NVM Subsystem Reset: Not Supported 00:13:21.925 Command Sets Supported 00:13:21.925 NVM Command Set: Supported 00:13:21.925 Boot Partition: Not Supported 00:13:21.925 Memory Page Size Minimum: 4096 bytes 00:13:21.925 Memory Page Size Maximum: 4096 bytes 00:13:21.925 Persistent Memory Region: Not Supported 00:13:21.925 Optional Asynchronous Events Supported 00:13:21.925 Namespace Attribute Notices: Supported 00:13:21.925 Firmware Activation Notices: Not Supported 00:13:21.925 ANA Change Notices: Not Supported 00:13:21.925 PLE Aggregate Log Change Notices: Not Supported 00:13:21.925 LBA Status Info Alert Notices: Not Supported 00:13:21.925 EGE Aggregate Log Change Notices: Not Supported 00:13:21.925 Normal NVM Subsystem Shutdown event: Not Supported 00:13:21.925 Zone Descriptor Change Notices: Not Supported 00:13:21.925 Discovery Log Change Notices: Not Supported 00:13:21.925 Controller Attributes 00:13:21.925 128-bit Host Identifier: Supported 00:13:21.925 Non-Operational Permissive Mode: Not Supported 00:13:21.925 NVM Sets: Not Supported 00:13:21.925 Read Recovery Levels: Not Supported 00:13:21.925 Endurance Groups: Not Supported 00:13:21.925 Predictable Latency Mode: Not Supported 00:13:21.925 Traffic Based Keep ALive: Not Supported 00:13:21.925 Namespace Granularity: Not Supported 00:13:21.925 SQ Associations: Not Supported 00:13:21.925 UUID List: Not Supported 00:13:21.925 Multi-Domain Subsystem: Not Supported 00:13:21.925 Fixed Capacity Management: Not Supported 00:13:21.925 Variable Capacity Management: Not Supported 00:13:21.925 Delete Endurance Group: Not Supported 00:13:21.925 Delete NVM Set: Not Supported 00:13:21.925 Extended LBA Formats Supported: Not Supported 00:13:21.925 Flexible Data Placement Supported: Not Supported 00:13:21.925 00:13:21.925 Controller Memory Buffer Support 00:13:21.925 ================================ 00:13:21.925 Supported: No 00:13:21.925 00:13:21.925 Persistent Memory Region Support 00:13:21.925 
================================ 00:13:21.925 Supported: No 00:13:21.925 00:13:21.925 Admin Command Set Attributes 00:13:21.925 ============================ 00:13:21.925 Security Send/Receive: Not Supported 00:13:21.925 Format NVM: Not Supported 00:13:21.925 Firmware Activate/Download: Not Supported 00:13:21.925 Namespace Management: Not Supported 00:13:21.925 Device Self-Test: Not Supported 00:13:21.925 Directives: Not Supported 00:13:21.925 NVMe-MI: Not Supported 00:13:21.925 Virtualization Management: Not Supported 00:13:21.925 Doorbell Buffer Config: Not Supported 00:13:21.925 Get LBA Status Capability: Not Supported 00:13:21.925 Command & Feature Lockdown Capability: Not Supported 00:13:21.925 Abort Command Limit: 4 00:13:21.926 Async Event Request Limit: 4 00:13:21.926 Number of Firmware Slots: N/A 00:13:21.926 Firmware Slot 1 Read-Only: N/A 00:13:21.926 Firmware Activation Without Reset: N/A 00:13:21.926 Multiple Update Detection Support: N/A 00:13:21.926 Firmware Update Granularity: No Information Provided 00:13:21.926 Per-Namespace SMART Log: No 00:13:21.926 Asymmetric Namespace Access Log Page: Not Supported 00:13:21.926 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:21.926 Command Effects Log Page: Supported 00:13:21.926 Get Log Page Extended Data: Supported 00:13:21.926 Telemetry Log Pages: Not Supported 00:13:21.926 Persistent Event Log Pages: Not Supported 00:13:21.926 Supported Log Pages Log Page: May Support 00:13:21.926 Commands Supported & Effects Log Page: Not Supported 00:13:21.926 Feature Identifiers & Effects Log Page:May Support 00:13:21.926 NVMe-MI Commands & Effects Log Page: May Support 00:13:21.926 Data Area 4 for Telemetry Log: Not Supported 00:13:21.926 Error Log Page Entries Supported: 128 00:13:21.926 Keep Alive: Supported 00:13:21.926 Keep Alive Granularity: 10000 ms 00:13:21.926 00:13:21.926 NVM Command Set Attributes 00:13:21.926 ========================== 00:13:21.926 Submission Queue Entry Size 00:13:21.926 Max: 64 00:13:21.926 Min: 64 00:13:21.926 Completion Queue Entry Size 00:13:21.926 Max: 16 00:13:21.926 Min: 16 00:13:21.926 Number of Namespaces: 32 00:13:21.926 Compare Command: Supported 00:13:21.926 Write Uncorrectable Command: Not Supported 00:13:21.926 Dataset Management Command: Supported 00:13:21.926 Write Zeroes Command: Supported 00:13:21.926 Set Features Save Field: Not Supported 00:13:21.926 Reservations: Not Supported 00:13:21.926 Timestamp: Not Supported 00:13:21.926 Copy: Supported 00:13:21.926 Volatile Write Cache: Present 00:13:21.926 Atomic Write Unit (Normal): 1 00:13:21.926 Atomic Write Unit (PFail): 1 00:13:21.926 Atomic Compare & Write Unit: 1 00:13:21.926 Fused Compare & Write: Supported 00:13:21.926 Scatter-Gather List 00:13:21.926 SGL Command Set: Supported (Dword aligned) 00:13:21.926 SGL Keyed: Not Supported 00:13:21.926 SGL Bit Bucket Descriptor: Not Supported 00:13:21.926 SGL Metadata Pointer: Not Supported 00:13:21.926 Oversized SGL: Not Supported 00:13:21.926 SGL Metadata Address: Not Supported 00:13:21.926 SGL Offset: Not Supported 00:13:21.926 Transport SGL Data Block: Not Supported 00:13:21.926 Replay Protected Memory Block: Not Supported 00:13:21.926 00:13:21.926 Firmware Slot Information 00:13:21.926 ========================= 00:13:21.926 Active slot: 1 00:13:21.926 Slot 1 Firmware Revision: 24.09 00:13:21.926 00:13:21.926 00:13:21.926 Commands Supported and Effects 00:13:21.926 ============================== 00:13:21.926 Admin Commands 00:13:21.926 -------------- 00:13:21.926 Get Log Page (02h): Supported 
00:13:21.926 Identify (06h): Supported 00:13:21.926 Abort (08h): Supported 00:13:21.926 Set Features (09h): Supported 00:13:21.926 Get Features (0Ah): Supported 00:13:21.926 Asynchronous Event Request (0Ch): Supported 00:13:21.926 Keep Alive (18h): Supported 00:13:21.926 I/O Commands 00:13:21.926 ------------ 00:13:21.926 Flush (00h): Supported LBA-Change 00:13:21.926 Write (01h): Supported LBA-Change 00:13:21.926 Read (02h): Supported 00:13:21.926 Compare (05h): Supported 00:13:21.926 Write Zeroes (08h): Supported LBA-Change 00:13:21.926 Dataset Management (09h): Supported LBA-Change 00:13:21.926 Copy (19h): Supported LBA-Change 00:13:21.926 00:13:21.926 Error Log 00:13:21.926 ========= 00:13:21.926 00:13:21.926 Arbitration 00:13:21.926 =========== 00:13:21.926 Arbitration Burst: 1 00:13:21.926 00:13:21.926 Power Management 00:13:21.926 ================ 00:13:21.926 Number of Power States: 1 00:13:21.926 Current Power State: Power State #0 00:13:21.926 Power State #0: 00:13:21.926 Max Power: 0.00 W 00:13:21.926 Non-Operational State: Operational 00:13:21.926 Entry Latency: Not Reported 00:13:21.926 Exit Latency: Not Reported 00:13:21.926 Relative Read Throughput: 0 00:13:21.926 Relative Read Latency: 0 00:13:21.926 Relative Write Throughput: 0 00:13:21.926 Relative Write Latency: 0 00:13:21.926 Idle Power: Not Reported 00:13:21.926 Active Power: Not Reported 00:13:21.926 Non-Operational Permissive Mode: Not Supported 00:13:21.926 00:13:21.926 Health Information 00:13:21.926 ================== 00:13:21.926 Critical Warnings: 00:13:21.926 Available Spare Space: OK 00:13:21.926 Temperature: OK 00:13:21.926 Device Reliability: OK 00:13:21.926 Read Only: No 00:13:21.926 Volatile Memory Backup: OK 00:13:21.926 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:21.926 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:21.926 Available Spare: 0% 00:13:21.926 Available Spare Threshold: 0% 00:13:21.926 [2024-07-24 17:56:08.122298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:21.926 [2024-07-24 17:56:08.130111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:21.926 [2024-07-24 17:56:08.130158] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:21.926 [2024-07-24 17:56:08.130175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.926 [2024-07-24 17:56:08.130186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.926 [2024-07-24 17:56:08.130196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.926 [2024-07-24 17:56:08.130206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.926 [2024-07-24 17:56:08.130293] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:21.926 [2024-07-24 17:56:08.130318] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:21.926 [2024-07-24 17:56:08.131300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.926 [2024-07-24 17:56:08.131371] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:21.926 [2024-07-24 17:56:08.131385] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:21.926 [2024-07-24 17:56:08.134112] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:21.926 [2024-07-24 17:56:08.134137] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 2 milliseconds 00:13:21.926 [2024-07-24 17:56:08.134188] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:21.926 [2024-07-24 17:56:08.135375] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:21.926 Life Percentage Used: 0% 00:13:21.926 Data Units Read: 0 00:13:21.926 Data Units Written: 0 00:13:21.926 Host Read Commands: 0 00:13:21.926 Host Write Commands: 0 00:13:21.926 Controller Busy Time: 0 minutes 00:13:21.926 Power Cycles: 0 00:13:21.926 Power On Hours: 0 hours 00:13:21.926 Unsafe Shutdowns: 0 00:13:21.926 Unrecoverable Media Errors: 0 00:13:21.926 Lifetime Error Log Entries: 0 00:13:21.926 Warning Temperature Time: 0 minutes 00:13:21.926 Critical Temperature Time: 0 minutes 00:13:21.926 00:13:21.926 Number of Queues 00:13:21.926 ================ 00:13:21.926 Number of I/O Submission Queues: 127 00:13:21.926 Number of I/O Completion Queues: 127 00:13:21.926 00:13:21.926 Active Namespaces 00:13:21.926 ================= 00:13:21.926 Namespace ID:1 00:13:21.926 Error Recovery Timeout: Unlimited 00:13:21.926 Command Set Identifier: NVM (00h) 00:13:21.926 Deallocate: Supported 00:13:21.926 Deallocated/Unwritten Error: Not Supported 00:13:21.926 Deallocated Read Value: Unknown 00:13:21.926 Deallocate in Write Zeroes: Not Supported 00:13:21.926 Deallocated Guard Field: 0xFFFF 00:13:21.926 Flush: Supported 00:13:21.926 Reservation: Supported 00:13:21.926 Namespace Sharing Capabilities: Multiple Controllers 00:13:21.926 Size (in LBAs): 131072 (0GiB) 00:13:21.926 Capacity (in LBAs): 131072 (0GiB) 00:13:21.926 Utilization (in LBAs): 131072 (0GiB) 00:13:21.926 NGUID: 3FBB8EDB7D514C718EA505BDB6C1C475 00:13:21.926 UUID: 3fbb8edb-7d51-4c71-8ea5-05bdb6c1c475 00:13:21.926 Thin Provisioning: Not Supported 00:13:21.926 Per-NS Atomic Units: Yes 00:13:21.926 Atomic Boundary Size (Normal): 0 00:13:21.926 Atomic Boundary Size (PFail): 0 00:13:21.926 Atomic Boundary Offset: 0 00:13:21.926 Maximum Single Source Range Length: 65535 00:13:21.926 Maximum Copy Length: 65535 00:13:21.926 Maximum Source Range Count: 1 00:13:21.926 NGUID/EUI64 Never Reused: No 00:13:21.926 Namespace Write Protected: No 00:13:21.926 Number of LBA Formats: 1 00:13:21.927 Current LBA Format: LBA Format #00 00:13:21.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:21.927 00:13:21.927
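The identify pass above and the two perf passes that follow address the target through the same VFIOUSER transport ID string. A minimal sketch of the perf invocations that produce the two latency tables below: -q is queue depth, -o the I/O size in bytes, -w the access pattern, -t the run time in seconds, and -c the core mask, while -s 256 and -g are carried over verbatim from the trace (judging by the EAL parameter lines earlier in this log, they size DPDK memory in MB and select single-file-segments mode, respectively; that reading is an assumption, not something the log states outright).

  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  trid='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 4 KiB reads at queue depth 128 for 5 seconds on core mask 0x2:
  $perf -r "$trid" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # Identical shape for the write pass that follows it:
  $perf -r "$trid" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:22.184 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.184 [2024-07-24 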
17:56:08.375956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:27.446 Initializing NVMe Controllers 00:13:27.446 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:27.446 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:27.446 Initialization complete. Launching workers. 00:13:27.446 ======================================================== 00:13:27.446 Latency(us) 00:13:27.446 Device Information : IOPS MiB/s Average min max 00:13:27.446 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33689.74 131.60 3798.40 1190.81 8297.79 00:13:27.446 ======================================================== 00:13:27.446 Total : 33689.74 131.60 3798.40 1190.81 8297.79 00:13:27.446 00:13:27.446 [2024-07-24 17:56:13.480450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:27.446 17:56:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:27.446 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.703 [2024-07-24 17:56:13.717142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:33.005 Initializing NVMe Controllers 00:13:33.005 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:33.005 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:33.005 Initialization complete. Launching workers. 
00:13:33.005 ======================================================== 00:13:33.005 Latency(us) 00:13:33.005 Device Information : IOPS MiB/s Average min max 00:13:33.005 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32016.00 125.06 4000.07 1222.90 8986.88 00:13:33.005 ======================================================== 00:13:33.005 Total : 32016.00 125.06 4000.07 1222.90 8986.88 00:13:33.005 00:13:33.005 [2024-07-24 17:56:18.739034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:33.005 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:33.005 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.005 [2024-07-24 17:56:18.949935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:38.272 [2024-07-24 17:56:24.094226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:38.272 Initializing NVMe Controllers 00:13:38.272 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:38.272 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:38.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:38.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:38.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:38.272 Initialization complete. Launching workers. 00:13:38.272 Starting thread on core 2 00:13:38.272 Starting thread on core 3 00:13:38.272 Starting thread on core 1 00:13:38.272 17:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:38.272 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.272 [2024-07-24 17:56:24.405590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:41.557 [2024-07-24 17:56:27.485566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:41.557 Initializing NVMe Controllers 00:13:41.557 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:41.557 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:41.557 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:41.557 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:41.557 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:41.557 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:41.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:41.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:41.558 Initialization complete. Launching workers. 
00:13:41.558 Starting thread on core 1 with urgent priority queue 00:13:41.558 Starting thread on core 2 with urgent priority queue 00:13:41.558 Starting thread on core 3 with urgent priority queue 00:13:41.558 Starting thread on core 0 with urgent priority queue 00:13:41.558 SPDK bdev Controller (SPDK2 ) core 0: 5216.67 IO/s 19.17 secs/100000 ios 00:13:41.558 SPDK bdev Controller (SPDK2 ) core 1: 5565.00 IO/s 17.97 secs/100000 ios 00:13:41.558 SPDK bdev Controller (SPDK2 ) core 2: 5516.00 IO/s 18.13 secs/100000 ios 00:13:41.558 SPDK bdev Controller (SPDK2 ) core 3: 5575.33 IO/s 17.94 secs/100000 ios 00:13:41.558 ======================================================== 00:13:41.558 00:13:41.558 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:41.558 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.558 [2024-07-24 17:56:27.789698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:41.558 Initializing NVMe Controllers 00:13:41.558 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:41.558 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:41.558 Namespace ID: 1 size: 0GB 00:13:41.558 Initialization complete. 00:13:41.558 INFO: using host memory buffer for IO 00:13:41.558 Hello world! 00:13:41.558 [2024-07-24 17:56:27.802831] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:41.815 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:41.815 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.073 [2024-07-24 17:56:28.091126] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:43.007 Initializing NVMe Controllers 00:13:43.007 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:43.007 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:43.007 Initialization complete. Launching workers. 
00:13:43.007 submit (in ns) avg, min, max = 8134.3, 3548.9, 4015338.9 00:13:43.007 complete (in ns) avg, min, max = 26011.3, 2060.0, 6010272.2 00:13:43.007 00:13:43.007 Submit histogram 00:13:43.007 ================ 00:13:43.007 Range in us Cumulative Count 00:13:43.007 3.532 - 3.556: 0.0075% ( 1) 00:13:43.007 3.556 - 3.579: 0.3395% ( 44) 00:13:43.007 3.579 - 3.603: 3.8778% ( 469) 00:13:43.007 3.603 - 3.627: 11.3089% ( 985) 00:13:43.007 3.627 - 3.650: 22.8895% ( 1535) 00:13:43.007 3.650 - 3.674: 34.6963% ( 1565) 00:13:43.007 3.674 - 3.698: 43.5081% ( 1168) 00:13:43.007 3.698 - 3.721: 50.9091% ( 981) 00:13:43.007 3.721 - 3.745: 56.2806% ( 712) 00:13:43.007 3.745 - 3.769: 60.9883% ( 624) 00:13:43.007 3.769 - 3.793: 65.4696% ( 594) 00:13:43.007 3.793 - 3.816: 68.8721% ( 451) 00:13:43.007 3.816 - 3.840: 71.5654% ( 357) 00:13:43.007 3.840 - 3.864: 75.0207% ( 458) 00:13:43.007 3.864 - 3.887: 78.8684% ( 510) 00:13:43.007 3.887 - 3.911: 82.9272% ( 538) 00:13:43.007 3.911 - 3.935: 86.0279% ( 411) 00:13:43.007 3.935 - 3.959: 87.9442% ( 254) 00:13:43.007 3.959 - 3.982: 89.6718% ( 229) 00:13:43.007 3.982 - 4.006: 91.5353% ( 247) 00:13:43.007 4.006 - 4.030: 92.7952% ( 167) 00:13:43.007 4.030 - 4.053: 93.9947% ( 159) 00:13:43.007 4.053 - 4.077: 94.7944% ( 106) 00:13:43.007 4.077 - 4.101: 95.5413% ( 99) 00:13:43.007 4.101 - 4.124: 96.1524% ( 81) 00:13:43.007 4.124 - 4.148: 96.5070% ( 47) 00:13:43.007 4.148 - 4.172: 96.7559% ( 33) 00:13:43.007 4.172 - 4.196: 96.8917% ( 18) 00:13:43.007 4.196 - 4.219: 96.9898% ( 13) 00:13:43.007 4.219 - 4.243: 97.0803% ( 12) 00:13:43.007 4.243 - 4.267: 97.1709% ( 12) 00:13:43.007 4.267 - 4.290: 97.2161% ( 6) 00:13:43.007 4.290 - 4.314: 97.3067% ( 12) 00:13:43.007 4.314 - 4.338: 97.3972% ( 12) 00:13:43.007 4.338 - 4.361: 97.5028% ( 14) 00:13:43.007 4.361 - 4.385: 97.5556% ( 7) 00:13:43.007 4.385 - 4.409: 97.5934% ( 5) 00:13:43.007 4.409 - 4.433: 97.6235% ( 4) 00:13:43.007 4.433 - 4.456: 97.6311% ( 1) 00:13:43.007 4.456 - 4.480: 97.6688% ( 5) 00:13:43.007 4.480 - 4.504: 97.6839% ( 2) 00:13:43.007 4.504 - 4.527: 97.6914% ( 1) 00:13:43.007 4.527 - 4.551: 97.6990% ( 1) 00:13:43.007 4.575 - 4.599: 97.7065% ( 1) 00:13:43.007 4.599 - 4.622: 97.7292% ( 3) 00:13:43.007 4.622 - 4.646: 97.7367% ( 1) 00:13:43.007 4.670 - 4.693: 97.7442% ( 1) 00:13:43.007 4.693 - 4.717: 97.7518% ( 1) 00:13:43.007 4.717 - 4.741: 97.7669% ( 2) 00:13:43.007 4.741 - 4.764: 97.8197% ( 7) 00:13:43.007 4.764 - 4.788: 97.8650% ( 6) 00:13:43.007 4.788 - 4.812: 97.8725% ( 1) 00:13:43.007 4.812 - 4.836: 97.8951% ( 3) 00:13:43.007 4.836 - 4.859: 97.9555% ( 8) 00:13:43.007 4.859 - 4.883: 98.0083% ( 7) 00:13:43.007 4.883 - 4.907: 98.0158% ( 1) 00:13:43.007 4.907 - 4.930: 98.0611% ( 6) 00:13:43.007 4.930 - 4.954: 98.0988% ( 5) 00:13:43.007 4.954 - 4.978: 98.1215% ( 3) 00:13:43.007 4.978 - 5.001: 98.1667% ( 6) 00:13:43.007 5.001 - 5.025: 98.2120% ( 6) 00:13:43.007 5.025 - 5.049: 98.2497% ( 5) 00:13:43.007 5.049 - 5.073: 98.2874% ( 5) 00:13:43.007 5.073 - 5.096: 98.3025% ( 2) 00:13:43.007 5.096 - 5.120: 98.3252% ( 3) 00:13:43.007 5.120 - 5.144: 98.3629% ( 5) 00:13:43.007 5.167 - 5.191: 98.3780% ( 2) 00:13:43.007 5.191 - 5.215: 98.4006% ( 3) 00:13:43.007 5.215 - 5.239: 98.4157% ( 2) 00:13:43.007 5.286 - 5.310: 98.4232% ( 1) 00:13:43.007 5.333 - 5.357: 98.4308% ( 1) 00:13:43.007 5.404 - 5.428: 98.4459% ( 2) 00:13:43.007 5.428 - 5.452: 98.4534% ( 1) 00:13:43.007 5.523 - 5.547: 98.4610% ( 1) 00:13:43.007 5.547 - 5.570: 98.4685% ( 1) 00:13:43.007 5.570 - 5.594: 98.4760% ( 1) 00:13:43.007 5.784 - 5.807: 98.4836% ( 1) 
00:13:43.007 [latency summary buckets elided: ranges from 5.879 us through 28.634 us climb from 98.5062% to 99.8944%, followed by outlier buckets 3980.705 - 4004.978: 99.9774% ( 11) and 4004.978 - 4029.250: 100.0000% ( 3)]
00:13:43.008 
00:13:43.008 Complete histogram
00:13:43.008 ==================
00:13:43.008 Range in us Cumulative Count
00:13:43.008 [per-bucket counts elided: the cumulative count rises from 2.050 - 2.062: 0.1433% ( 19) through the heavily populated buckets 2.062 - 2.074: 19.1626% ( 2521) and 2.074 - 2.086: 42.7914% ( 3132), and reaches 100.0000% at 5995.330 - 6019.603 ( 1); the controller notice that the raw capture interleaved mid-bucket, splitting the value 98.6647%, is reproduced on its own line below]
00:13:43.008 [2024-07-24 17:56:29.195871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:13:43.008 
00:13:43.008 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:13:43.009 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:13:43.009 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:13:43.009 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:13:43.009 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:43.267 [ 00:13:43.267 { 00:13:43.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:43.267 "subtype": "Discovery", 00:13:43.267 "listen_addresses": [], 00:13:43.267 "allow_any_host": true, 00:13:43.267 "hosts": [] 00:13:43.267 }, 00:13:43.267 { 00:13:43.267 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:43.267 "subtype": "NVMe", 00:13:43.267 "listen_addresses": [ 00:13:43.267 { 00:13:43.267 "trtype": "VFIOUSER", 00:13:43.267 "adrfam": "IPv4", 00:13:43.267 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:43.267 "trsvcid": "0" 00:13:43.267 } 00:13:43.267 ], 00:13:43.267 "allow_any_host": true, 00:13:43.267 "hosts": [], 00:13:43.267 "serial_number": "SPDK1", 00:13:43.267 "model_number": "SPDK bdev Controller", 00:13:43.267 "max_namespaces": 32, 00:13:43.267 "min_cntlid": 1, 00:13:43.267 "max_cntlid": 65519, 00:13:43.267 "namespaces": [ 00:13:43.267 { 00:13:43.267 "nsid": 1, 00:13:43.267 "bdev_name": "Malloc1", 00:13:43.267 "name": "Malloc1", 00:13:43.267 "nguid": "52443612A6CC4555B60C824715D0E386", 00:13:43.267 "uuid": "52443612-a6cc-4555-b60c-824715d0e386" 00:13:43.267 }, 00:13:43.267 { 00:13:43.267 "nsid": 2, 00:13:43.267 "bdev_name": "Malloc3", 00:13:43.267 "name": "Malloc3", 00:13:43.267 "nguid": "2EC91CCCBAC44F3DB0848B894B7E7847", 00:13:43.267 "uuid": "2ec91ccc-bac4-4f3d-b084-8b894b7e7847" 00:13:43.267 } 00:13:43.267 ] 00:13:43.267 }, 00:13:43.267 { 00:13:43.267 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:43.267 "subtype": "NVMe", 00:13:43.267 "listen_addresses": [ 00:13:43.267 { 00:13:43.267 "trtype": "VFIOUSER", 00:13:43.267 "adrfam": "IPv4", 00:13:43.267 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:43.267 "trsvcid": "0" 00:13:43.267 } 00:13:43.267 ], 00:13:43.267 "allow_any_host": true, 00:13:43.267 "hosts": [], 00:13:43.267 "serial_number": "SPDK2", 00:13:43.267 "model_number": "SPDK bdev Controller", 00:13:43.267 "max_namespaces": 32, 00:13:43.267 "min_cntlid": 1, 00:13:43.267 "max_cntlid": 65519, 00:13:43.267 "namespaces": [ 00:13:43.267 { 00:13:43.267 "nsid": 1, 00:13:43.267 "bdev_name": "Malloc2", 00:13:43.267 "name": "Malloc2", 00:13:43.267 "nguid": "3FBB8EDB7D514C718EA505BDB6C1C475", 00:13:43.267 "uuid": "3fbb8edb-7d51-4c71-8ea5-05bdb6c1c475" 00:13:43.267 } 00:13:43.267 ] 00:13:43.267 } 00:13:43.267 ] 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2767132 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:43.267 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:43.526 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.526 [2024-07-24 17:56:29.661542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:43.526 Malloc4 00:13:43.526 17:56:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:43.784 [2024-07-24 17:56:30.032345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:43.784 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:44.042 Asynchronous Event Request test 00:13:44.042 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.042 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.042 Registering asynchronous event callbacks... 00:13:44.042 Starting namespace attribute notice tests for all controllers... 00:13:44.042 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:44.042 aer_cb - Changed Namespace 00:13:44.042 Cleaning up... 00:13:44.042 [ 00:13:44.042 { 00:13:44.042 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.042 "subtype": "Discovery", 00:13:44.043 "listen_addresses": [], 00:13:44.043 "allow_any_host": true, 00:13:44.043 "hosts": [] 00:13:44.043 }, 00:13:44.043 { 00:13:44.043 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:44.043 "subtype": "NVMe", 00:13:44.043 "listen_addresses": [ 00:13:44.043 { 00:13:44.043 "trtype": "VFIOUSER", 00:13:44.043 "adrfam": "IPv4", 00:13:44.043 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:44.043 "trsvcid": "0" 00:13:44.043 } 00:13:44.043 ], 00:13:44.043 "allow_any_host": true, 00:13:44.043 "hosts": [], 00:13:44.043 "serial_number": "SPDK1", 00:13:44.043 "model_number": "SPDK bdev Controller", 00:13:44.043 "max_namespaces": 32, 00:13:44.043 "min_cntlid": 1, 00:13:44.043 "max_cntlid": 65519, 00:13:44.043 "namespaces": [ 00:13:44.043 { 00:13:44.043 "nsid": 1, 00:13:44.043 "bdev_name": "Malloc1", 00:13:44.043 "name": "Malloc1", 00:13:44.043 "nguid": "52443612A6CC4555B60C824715D0E386", 00:13:44.043 "uuid": "52443612-a6cc-4555-b60c-824715d0e386" 00:13:44.043 }, 00:13:44.043 { 00:13:44.043 "nsid": 2, 00:13:44.043 "bdev_name": "Malloc3", 00:13:44.043 "name": "Malloc3", 00:13:44.043 "nguid": "2EC91CCCBAC44F3DB0848B894B7E7847", 00:13:44.043 "uuid": "2ec91ccc-bac4-4f3d-b084-8b894b7e7847" 00:13:44.043 } 00:13:44.043 ] 00:13:44.043 }, 00:13:44.043 { 00:13:44.043 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:44.043 "subtype": "NVMe", 00:13:44.043 "listen_addresses": [ 00:13:44.043 { 00:13:44.043 "trtype": "VFIOUSER", 00:13:44.043 "adrfam": "IPv4", 00:13:44.043 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:44.043 "trsvcid": "0" 00:13:44.043 } 00:13:44.043 ], 00:13:44.043 "allow_any_host": true, 00:13:44.043 "hosts": [], 00:13:44.043 
"serial_number": "SPDK2", 00:13:44.043 "model_number": "SPDK bdev Controller", 00:13:44.043 "max_namespaces": 32, 00:13:44.043 "min_cntlid": 1, 00:13:44.043 "max_cntlid": 65519, 00:13:44.043 "namespaces": [ 00:13:44.043 { 00:13:44.043 "nsid": 1, 00:13:44.043 "bdev_name": "Malloc2", 00:13:44.043 "name": "Malloc2", 00:13:44.043 "nguid": "3FBB8EDB7D514C718EA505BDB6C1C475", 00:13:44.043 "uuid": "3fbb8edb-7d51-4c71-8ea5-05bdb6c1c475" 00:13:44.043 }, 00:13:44.043 { 00:13:44.043 "nsid": 2, 00:13:44.043 "bdev_name": "Malloc4", 00:13:44.043 "name": "Malloc4", 00:13:44.043 "nguid": "FFA2B02B296A432F9B15631ED62B42C5", 00:13:44.043 "uuid": "ffa2b02b-296a-432f-9b15-631ed62b42c5" 00:13:44.043 } 00:13:44.043 ] 00:13:44.043 } 00:13:44.043 ] 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2767132 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2761507 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2761507 ']' 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2761507 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.043 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2761507 00:13:44.300 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:44.300 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:44.300 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2761507' 00:13:44.300 killing process with pid 2761507 00:13:44.300 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2761507 00:13:44.300 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2761507 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2767273 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2767273' 00:13:44.559 Process pid: 2767273 00:13:44.559 17:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2767273 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2767273 ']' 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.559 17:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:44.559 [2024-07-24 17:56:30.782907] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:44.559 [2024-07-24 17:56:30.784028] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:13:44.559 [2024-07-24 17:56:30.784123] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.559 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.817 [2024-07-24 17:56:30.848340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.818 [2024-07-24 17:56:30.970171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.818 [2024-07-24 17:56:30.970220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.818 [2024-07-24 17:56:30.970253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.818 [2024-07-24 17:56:30.970268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.818 [2024-07-24 17:56:30.970280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.818 [2024-07-24 17:56:30.970348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.818 [2024-07-24 17:56:30.970429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.818 [2024-07-24 17:56:30.970405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.818 [2024-07-24 17:56:30.970431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.818 [2024-07-24 17:56:31.082949] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:44.818 [2024-07-24 17:56:31.083259] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:44.818 [2024-07-24 17:56:31.083527] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
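The remaining thread-mode notices and the setup_nvmf_vfio_user trace that continue below condense to a short RPC sequence. A minimal sketch, assuming nvmf_tgt is already serving the default /var/tmp/spdk.sock RPC socket and abbreviating the full scripts/rpc.py path used in the trace to rpc.py:

    # interrupt-mode VFIO-user transport, then two malloc-backed subsystems
    rpc.py nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MB bdev, 512-byte blocks
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Note that the listener address is a directory rather than an IP endpoint: the VFIOUSER transport exposes each controller through a vfio-user socket created under that path, which is why the trace builds the /var/run/vfio-user hierarchy before adding listeners.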
00:13:44.818 [2024-07-24 17:56:31.084186] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:44.818 [2024-07-24 17:56:31.084454] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:45.751 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.751 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:45.751 17:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:46.684 17:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:46.942 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:46.942 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:46.942 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:46.942 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:46.942 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:47.201 Malloc1 00:13:47.201 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:47.459 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:47.717 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:47.974 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:47.974 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:47.974 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:48.539 Malloc2 00:13:48.539 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:48.797 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:49.054 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2767273 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2767273 ']' 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2767273 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2767273 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2767273' 00:13:49.312 killing process with pid 2767273 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2767273 00:13:49.312 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2767273 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:49.571 00:13:49.571 real 0m53.804s 00:13:49.571 user 3m32.090s 00:13:49.571 sys 0m4.703s 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 ************************************ 00:13:49.571 END TEST nvmf_vfio_user 00:13:49.571 ************************************ 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 ************************************ 00:13:49.571 START TEST nvmf_vfio_user_nvme_compliance 00:13:49.571 ************************************ 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:49.571 * Looking for test storage... 
00:13:49.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.571 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain prefixes and standard system paths elided]
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated toolchain prefixes and standard system paths elided]
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated toolchain prefixes and standard system paths elided]
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[long PATH echo elided]
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2767998 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2767998' 00:13:49.572 Process pid: 2767998 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2767998 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2767998 ']' 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.572 17:56:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:49.830 [2024-07-24 17:56:35.859720] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
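Before the EAL parameter dump below, the shape of the compliance harness is worth spelling out: it stands up a single VFIO-user controller and points the nvme_compliance binary at it. Condensed from the rpc_cmd trace that follows, under the same assumptions as the earlier sketch (default RPC socket, rpc.py abbreviation, paths relative to the spdk checkout):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each of the 18 CUnit tests recorded below enables the controller, exercises one admin-path corner case (expecting the *ERROR* lines shown in the log), and disables the controller again before the next test.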
00:13:49.830 [2024-07-24 17:56:35.859810] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.830 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.830 [2024-07-24 17:56:35.918076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:49.830 [2024-07-24 17:56:36.027385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.830 [2024-07-24 17:56:36.027457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.830 [2024-07-24 17:56:36.027485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.830 [2024-07-24 17:56:36.027498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.830 [2024-07-24 17:56:36.027509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.830 [2024-07-24 17:56:36.027598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.830 [2024-07-24 17:56:36.027652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.830 [2024-07-24 17:56:36.027670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.089 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.089 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:50.089 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 malloc0 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.022 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.023 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.023 17:56:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:51.023 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.281 00:13:51.281 00:13:51.281 CUnit - A unit testing framework for C - Version 2.1-3 00:13:51.281 http://cunit.sourceforge.net/ 00:13:51.281 00:13:51.281 00:13:51.281 Suite: nvme_compliance 00:13:51.281 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 17:56:37.388684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.281 [2024-07-24 17:56:37.390170] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:51.281 [2024-07-24 17:56:37.390198] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:51.281 [2024-07-24 17:56:37.390211] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:51.281 [2024-07-24 17:56:37.391703] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.281 passed 00:13:51.281 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 17:56:37.479298] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.281 [2024-07-24 17:56:37.482320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.281 passed 00:13:51.539 Test: admin_identify_ns ...[2024-07-24 17:56:37.571363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.539 [2024-07-24 17:56:37.631119] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:51.539 [2024-07-24 17:56:37.639117] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:51.539 [2024-07-24 
17:56:37.660230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.539 passed 00:13:51.539 Test: admin_get_features_mandatory_features ...[2024-07-24 17:56:37.746398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.539 [2024-07-24 17:56:37.749430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.539 passed 00:13:51.796 Test: admin_get_features_optional_features ...[2024-07-24 17:56:37.836009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.796 [2024-07-24 17:56:37.839035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.796 passed 00:13:51.797 Test: admin_set_features_number_of_queues ...[2024-07-24 17:56:37.925467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:51.797 [2024-07-24 17:56:38.030225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:51.797 passed 00:13:52.055 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 17:56:38.115431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.055 [2024-07-24 17:56:38.118466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.055 passed 00:13:52.055 Test: admin_get_log_page_with_lpo ...[2024-07-24 17:56:38.204538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.055 [2024-07-24 17:56:38.272130] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:52.055 [2024-07-24 17:56:38.285189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.055 passed 00:13:52.313 Test: fabric_property_get ...[2024-07-24 17:56:38.369506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.313 [2024-07-24 17:56:38.373789] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:52.313 [2024-07-24 17:56:38.375541] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.313 passed 00:13:52.313 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 17:56:38.460067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.313 [2024-07-24 17:56:38.461396] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:52.313 [2024-07-24 17:56:38.463107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.313 passed 00:13:52.313 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 17:56:38.548591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.570 [2024-07-24 17:56:38.632111] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:52.570 [2024-07-24 17:56:38.648115] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:52.570 [2024-07-24 17:56:38.653209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.570 passed 00:13:52.570 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 17:56:38.741494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.570 [2024-07-24 17:56:38.742758] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:13:52.570 [2024-07-24 17:56:38.744511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.570 passed 00:13:52.570 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 17:56:38.829816] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.828 [2024-07-24 17:56:38.905130] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:52.828 [2024-07-24 17:56:38.929115] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:52.828 [2024-07-24 17:56:38.934220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.828 passed 00:13:52.828 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 17:56:39.017858] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:52.828 [2024-07-24 17:56:39.019181] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:52.828 [2024-07-24 17:56:39.019223] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:52.828 [2024-07-24 17:56:39.020881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:52.828 passed 00:13:53.085 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 17:56:39.103689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.085 [2024-07-24 17:56:39.195120] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:53.085 [2024-07-24 17:56:39.203114] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:53.085 [2024-07-24 17:56:39.211110] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:53.085 [2024-07-24 17:56:39.219113] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:53.085 [2024-07-24 17:56:39.248255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.085 passed 00:13:53.085 Test: admin_create_io_sq_verify_pc ...[2024-07-24 17:56:39.334398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.085 [2024-07-24 17:56:39.349138] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:53.343 [2024-07-24 17:56:39.366756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.343 passed 00:13:53.343 Test: admin_create_io_qp_max_qps ...[2024-07-24 17:56:39.455388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.777 [2024-07-24 17:56:40.549120] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:54.777 [2024-07-24 17:56:40.936788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.777 passed 00:13:55.045 Test: admin_create_io_sq_shared_cq ...[2024-07-24 17:56:41.021077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.045 [2024-07-24 17:56:41.155116] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:55.045 [2024-07-24 17:56:41.192205] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.045 passed 00:13:55.045 00:13:55.045 Run Summary: Type Total Ran Passed Failed Inactive 00:13:55.045 
suites 1 1 n/a 0 0 00:13:55.045 tests 18 18 18 0 0 00:13:55.045 asserts 360 360 360 0 n/a 00:13:55.045 00:13:55.045 Elapsed time = 1.579 seconds 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2767998 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2767998 ']' 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2767998 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2767998 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2767998' 00:13:55.045 killing process with pid 2767998 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2767998 00:13:55.045 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2767998 00:13:55.303 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:55.303 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:55.303 00:13:55.303 real 0m5.820s 00:13:55.303 user 0m16.299s 00:13:55.303 sys 0m0.536s 00:13:55.303 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.303 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:55.303 ************************************ 00:13:55.303 END TEST nvmf_vfio_user_nvme_compliance 00:13:55.303 ************************************ 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.562 ************************************ 00:13:55.562 START TEST nvmf_vfio_user_fuzz 00:13:55.562 ************************************ 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:55.562 * Looking for test storage... 
00:13:55.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.562 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2768725 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2768725' 00:13:55.563 Process pid: 2768725 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2768725 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2768725 ']' 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
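The target launch and the waitforlisten call above follow a standard launch-then-poll pattern: start nvmf_tgt in the background, arm a cleanup trap, then block until its RPC socket answers. A minimal sketch of that pattern, reusing the binary path, flags, trap, socket, and 100-retry budget visible in this trace; the polling loop is an illustrative stand-in for SPDK's waitforlisten helper, not its exact code:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch the target: shm id 0 (-i), all trace groups (-e 0xFFFF), core 0 only (-m 0x1).
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT

    # Poll the UNIX-domain RPC socket until the target answers (waitforlisten's job).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done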
00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.563 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:55.821 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.821 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:55.821 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:56.754 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:56.754 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.754 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.012 malloc0 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
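Stripped of the xtrace noise, the RPC sequence just traced provisions the entire fuzz target: one vfio-user transport, a 64 MiB RAM-backed bdev with 512-byte blocks, a subsystem allowing any host (-a) with serial number spdk (-s), its namespace, and a listener rooted at the /var/run/vfio-user directory. Since rpc_cmd here is effectively a wrapper around scripts/rpc.py, the same setup as a plain script (run from the SPDK tree) would be:

    rpc="scripts/rpc.py"     # talks to /var/tmp/spdk.sock by default

    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user                  # the vfio-user traddr is a directory
    $rpc bdev_malloc_create 64 512 -b malloc0    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
            -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting transport ID string, trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user, is exactly what gets handed to the fuzzer next.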
00:13:57.012 17:56:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:14:29.073 Fuzzing completed. Shutting down the fuzz application
00:14:29.073 
00:14:29.073 Dumping successful admin opcodes:
00:14:29.073 8, 9, 10, 24,
00:14:29.073 Dumping successful io opcodes:
00:14:29.073 0,
00:14:29.073 NS: 0x200003a1ef00 I/O qp, Total commands completed: 599243, total successful commands: 2318, random_seed: 360388352
00:14:29.073 NS: 0x200003a1ef00 admin qp, Total commands completed: 123146, total successful commands: 1009, random_seed: 3801833216
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2768725
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2768725 ']'
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2768725
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2768725
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:29.073 17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2768725'
killing process with pid 2768725
17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2768725
17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2768725
17:57:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:14:29.073 
00:14:29.073 real 0m32.388s
00:14:29.073 user 0m31.345s
00:14:29.073 sys 0m29.969s
00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:29.073
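The whole 30-second fuzz pass above is driven by a single nvme_fuzz invocation. Annotated for reference, with the caveat that -N and -a are copied verbatim from the run rather than interpreted, since their meaning is not shown anywhere in this log:

    FUZZ=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

    # -m 0x2 : pin the fuzzer to core 1, away from the target running on core 0
    # -t 30  : fuzz for 30 seconds (matches the 17:56:43 -> 17:57:13 window above)
    # -S     : seed for the random command generator, for reproducibility
    # -F     : transport ID of the controller under test
    "$FUZZ" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

A pass here means the target survived roughly 600k randomly generated I/O commands and 120k admin commands; the opcode dumps simply record which of those completed successfully rather than being rejected.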
************************************ 00:14:29.073 END TEST nvmf_vfio_user_fuzz 00:14:29.073 ************************************ 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:29.073 ************************************ 00:14:29.073 START TEST nvmf_auth_target 00:14:29.073 ************************************ 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:29.073 * Looking for test storage... 00:14:29.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.073 17:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.073 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.074 17:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.011 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.012 17:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:30.012 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:30.012 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:30.012 Found net devices under 0000:09:00.0: cvl_0_0 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.012 17:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:30.012 Found net devices under 0000:09:00.1: cvl_0_1 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.012 17:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:30.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:30.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:14:30.012 
00:14:30.012 --- 10.0.0.2 ping statistics ---
00:14:30.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.012 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:30.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:30.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms
00:14:30.012 
00:14:30.012 --- 10.0.0.1 ping statistics ---
00:14:30.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.012 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2774788
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2774788
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2774788 ']'
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.012 17:57:16
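The network bring-up recorded above is nvmf_tcp_init in miniature: the first E810 port (cvl_0_0) becomes the target inside a private network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, and one ping in each direction proves the path before any NVMe-oF traffic flows. Condensed into a recipe, using the interface, namespace, and address names from this run:

    NS=cvl_0_0_ns_spdk                                   # target namespace
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

From this point every target-side command is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP gains the NVMF_TARGET_NS_CMD prefix in the trace.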
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.012 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2774814 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3b34d029d4244ad76bf1069f84798bb5eec850ee483abe7b 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3EM 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3b34d029d4244ad76bf1069f84798bb5eec850ee483abe7b 0 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3b34d029d4244ad76bf1069f84798bb5eec850ee483abe7b 0 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3b34d029d4244ad76bf1069f84798bb5eec850ee483abe7b 00:14:30.580 17:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3EM 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3EM 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.3EM 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=75d306f398308704f2c83306c67468654945784d8afb2a00ec36e8e3d02d4dd1 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nJl 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 75d306f398308704f2c83306c67468654945784d8afb2a00ec36e8e3d02d4dd1 3 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 75d306f398308704f2c83306c67468654945784d8afb2a00ec36e8e3d02d4dd1 3 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=75d306f398308704f2c83306c67468654945784d8afb2a00ec36e8e3d02d4dd1 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nJl 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nJl 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.nJl 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.580 17:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a5c64207119ff6c08a4c930b521a877e 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Pcj 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a5c64207119ff6c08a4c930b521a877e 1 00:14:30.580 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a5c64207119ff6c08a4c930b521a877e 1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a5c64207119ff6c08a4c930b521a877e 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Pcj 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Pcj 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Pcj 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a87bd39b67cf4ddf75cc272b38bd1e34647c043cd5fd2abb 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.O2O 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a87bd39b67cf4ddf75cc272b38bd1e34647c043cd5fd2abb 2 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
a87bd39b67cf4ddf75cc272b38bd1e34647c043cd5fd2abb 2 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a87bd39b67cf4ddf75cc272b38bd1e34647c043cd5fd2abb 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.O2O 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.O2O 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.O2O 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ab103f7a14683fc23b1c2528849e55daa8a7d47be7157247 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UF1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ab103f7a14683fc23b1c2528849e55daa8a7d47be7157247 2 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ab103f7a14683fc23b1c2528849e55daa8a7d47be7157247 2 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ab103f7a14683fc23b1c2528849e55daa8a7d47be7157247 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UF1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UF1 00:14:30.581 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.UF1 00:14:30.840 17:57:16 
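Each gen_dhchap_key call in this stretch draws len hex characters from /dev/urandom and wraps them in the NVMe DH-HMAC-CHAP secret representation via a short python helper whose body the log elides. A self-contained sketch of the whole helper, under two stated assumptions: the second DHHC-1 field is the hash identifier matching the digests map defined earlier in the trace (00 = null, 01 = sha256, 02 = sha384, 03 = sha512), and the third field is base64 of the ASCII secret followed by its little-endian CRC-32, as the spec's secret representation prescribes:

    gen_dhchap_key() {   # usage: gen_dhchap_key <digest-name> <secret-length>
        local digest=$1 len=$2 key file
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of entropy
        file=$(mktemp -t "spdk.key-$digest.XXX")
        python3 - "$key" "${ids[$digest]}" > "$file" <<'PY'
    import base64, sys, zlib
    secret, hash_id = sys.argv[1].encode(), int(sys.argv[2])
    crc = zlib.crc32(secret).to_bytes(4, "little")   # CRC-32 of the secret, little endian
    print(f"DHHC-1:{hash_id:02x}:{base64.b64encode(secret + crc).decode()}:")
    PY
        chmod 0600 "$file"    # key files are written without group/world access
        echo "$file"
    }

Usage mirrors the trace, e.g. keys[1]=$(gen_dhchap_key sha256 32) for a 32-character secret.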
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=99b4241601f99d677c6f1f42036005a3 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Zyq 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 99b4241601f99d677c6f1f42036005a3 1 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 99b4241601f99d677c6f1f42036005a3 1 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=99b4241601f99d677c6f1f42036005a3 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Zyq 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Zyq 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Zyq 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c4d1ea2560733b0f109643716ef1dcd5fa6b37e3b61c6517db7e25b5fab15bd9 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:30.840 
17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.r8b 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c4d1ea2560733b0f109643716ef1dcd5fa6b37e3b61c6517db7e25b5fab15bd9 3 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c4d1ea2560733b0f109643716ef1dcd5fa6b37e3b61c6517db7e25b5fab15bd9 3 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c4d1ea2560733b0f109643716ef1dcd5fa6b37e3b61c6517db7e25b5fab15bd9 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.r8b 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.r8b 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.r8b 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2774788 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2774788 ']' 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.840 17:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2774814 /var/tmp/host.sock 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2774814 ']' 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:14:31.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.098 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3EM 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.356 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.357 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.3EM 00:14:31.357 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3EM 00:14:31.614 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.nJl ]] 00:14:31.614 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nJl 00:14:31.614 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.614 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.614 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.614 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nJl 00:14:31.614 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nJl 00:14:31.872 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:31.872 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Pcj 00:14:31.872 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.872 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 17:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.872 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Pcj 00:14:31.872 17:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Pcj 00:14:32.130 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.O2O ]] 00:14:32.130 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O2O 00:14:32.130 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.130 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.130 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.130 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O2O 00:14:32.130 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O2O 00:14:32.388 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:32.388 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UF1 00:14:32.388 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.388 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.388 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.388 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.UF1 00:14:32.388 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.UF1 00:14:32.646 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Zyq ]] 00:14:32.646 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zyq 00:14:32.646 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.646 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.646 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.646 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zyq 00:14:32.646 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Zyq 00:14:32.905 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
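Context for the keyring_file_add_key calls running here: each generated key file is registered twice, once with the target application on the default RPC socket (/var/tmp/spdk.sock, the process waitforlisten blocked on above) via rpc_cmd, and once with the host-side application on /var/tmp/host.sock via the hostrpc wrapper, so both ends of the DH-HMAC-CHAP exchange can later refer to the same material by keyring name (key0/ckey0, key1/ckey1, ...). Condensed from the trace, using the key0 pair:

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.3EM                          # target side
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3EM    # host side
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nJl
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nJl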
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:32.905 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.r8b 00:14:32.905 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.905 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.905 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.905 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.r8b 00:14:32.905 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.r8b 00:14:33.163 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:33.163 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:33.163 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.163 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.163 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.163 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.422 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
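Each authenticated attach that follows repeats the three-RPC flow just traced at target/auth.sh@94/@39/@40: pin the host's allowed algorithms with bdev_nvme_set_options, add the host NQN to the subsystem with its DH-HMAC-CHAP keys, then attach a controller from the host side. Condensed here, with $hostnqn standing in for the long nqn.2014-08.org.nvmexpress:uuid:... value from the trace:

scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0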
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.680 00:14:33.680 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.680 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.680 17:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.939 { 00:14:33.939 "cntlid": 1, 00:14:33.939 "qid": 0, 00:14:33.939 "state": "enabled", 00:14:33.939 "thread": "nvmf_tgt_poll_group_000", 00:14:33.939 "listen_address": { 00:14:33.939 "trtype": "TCP", 00:14:33.939 "adrfam": "IPv4", 00:14:33.939 "traddr": "10.0.0.2", 00:14:33.939 "trsvcid": "4420" 00:14:33.939 }, 00:14:33.939 "peer_address": { 00:14:33.939 "trtype": "TCP", 00:14:33.939 "adrfam": "IPv4", 00:14:33.939 "traddr": "10.0.0.1", 00:14:33.939 "trsvcid": "36932" 00:14:33.939 }, 00:14:33.939 "auth": { 00:14:33.939 "state": "completed", 00:14:33.939 "digest": "sha256", 00:14:33.939 "dhgroup": "null" 00:14:33.939 } 00:14:33.939 } 00:14:33.939 ]' 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:33.939 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.197 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.198 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.198 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.456 17:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:35.389 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.647 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.648 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.648 17:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:35.906 00:14:35.906 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.906 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.906 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.164 { 00:14:36.164 "cntlid": 3, 00:14:36.164 "qid": 0, 00:14:36.164 "state": "enabled", 00:14:36.164 "thread": "nvmf_tgt_poll_group_000", 00:14:36.164 "listen_address": { 00:14:36.164 "trtype": "TCP", 00:14:36.164 "adrfam": "IPv4", 00:14:36.164 "traddr": "10.0.0.2", 00:14:36.164 "trsvcid": "4420" 00:14:36.164 }, 00:14:36.164 "peer_address": { 00:14:36.164 "trtype": "TCP", 00:14:36.164 "adrfam": "IPv4", 00:14:36.164 "traddr": "10.0.0.1", 00:14:36.164 "trsvcid": "36970" 00:14:36.164 }, 00:14:36.164 "auth": { 00:14:36.164 "state": "completed", 00:14:36.164 "digest": "sha256", 00:14:36.164 "dhgroup": "null" 00:14:36.164 } 00:14:36.164 } 00:14:36.164 ]' 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.164 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.422 17:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:14:37.354 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.354 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:14:37.354 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:37.354 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.354 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.354 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.354 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.354 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:37.355 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.612 17:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.178 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
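Each combination is additionally exercised through the kernel initiator, as in the nvme connect/disconnect pairs above: nvme-cli is handed the raw secrets directly (host secret via --dhchap-secret, expected controller secret via --dhchap-ctrl-secret) rather than keyring names, and the "disconnected 1 controller(s)" message confirms the session came up authenticated end to end. The shape of the call, with the literal secrets from the trace elided and $hostnqn/$hostid standing in for the uuid-based values:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:<host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller key>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0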
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.178 { 00:14:38.178 "cntlid": 5, 00:14:38.178 "qid": 0, 00:14:38.178 "state": "enabled", 00:14:38.178 "thread": "nvmf_tgt_poll_group_000", 00:14:38.178 "listen_address": { 00:14:38.178 "trtype": "TCP", 00:14:38.178 "adrfam": "IPv4", 00:14:38.178 "traddr": "10.0.0.2", 00:14:38.178 "trsvcid": "4420" 00:14:38.178 }, 00:14:38.178 "peer_address": { 00:14:38.178 "trtype": "TCP", 00:14:38.178 "adrfam": "IPv4", 00:14:38.178 "traddr": "10.0.0.1", 00:14:38.178 "trsvcid": "36986" 00:14:38.178 }, 00:14:38.178 "auth": { 00:14:38.178 "state": "completed", 00:14:38.178 "digest": "sha256", 00:14:38.178 "dhgroup": "null" 00:14:38.178 } 00:14:38.178 } 00:14:38.178 ]' 00:14:38.178 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.436 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.436 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.436 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:38.436 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.436 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.436 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.436 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.693 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:39.627 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.916 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:40.175 00:14:40.175 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.175 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.175 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
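Worth noting in the key3 pass running here: ckeys[3] was left empty at target/auth.sh@70, so the parameter expansion echoed at @37 drops the controller-key flag entirely and nvmf_subsystem_add_host is called with --dhchap-key key3 alone, i.e. this iteration covers unidirectional authentication (the host proves itself; the controller is not challenged in return). The expansion, with the positional $3 renamed $keyid for readability:

# from target/auth.sh@37: the flag exists only when a companion ckey was generated
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"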
common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.433 { 00:14:40.433 "cntlid": 7, 00:14:40.433 "qid": 0, 00:14:40.433 "state": "enabled", 00:14:40.433 "thread": "nvmf_tgt_poll_group_000", 00:14:40.433 "listen_address": { 00:14:40.433 "trtype": "TCP", 00:14:40.433 "adrfam": "IPv4", 00:14:40.433 "traddr": "10.0.0.2", 00:14:40.433 "trsvcid": "4420" 00:14:40.433 }, 00:14:40.433 "peer_address": { 00:14:40.433 "trtype": "TCP", 00:14:40.433 "adrfam": "IPv4", 00:14:40.433 "traddr": "10.0.0.1", 00:14:40.433 "trsvcid": "42596" 00:14:40.433 }, 00:14:40.433 "auth": { 00:14:40.433 "state": "completed", 00:14:40.433 "digest": "sha256", 00:14:40.433 "dhgroup": "null" 00:14:40.433 } 00:14:40.433 } 00:14:40.433 ]' 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.433 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.691 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.621 17:57:27 
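At this point the trace switches from --dhchap-dhgroups null to ffdhe2048, which exposes the overall structure: the target/auth.sh@91/@92/@93 tags above correspond to three nested loops that re-run the same connect_authenticate assertions for every digest, DH group, and key index. Reconstructed from those line tags (a sketch of the loop skeleton, not a verbatim copy of the script):

for digest in "${digests[@]}"; do          # sha256 first, per the records above
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do     # 0..3
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done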
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:41.621 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:41.877 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:41.877 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.877 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.878 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.135 00:14:42.392 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.392 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.392 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.392 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.392 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.392 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.392 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.648 17:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.648 { 00:14:42.648 "cntlid": 9, 00:14:42.648 "qid": 0, 00:14:42.648 "state": "enabled", 00:14:42.648 "thread": "nvmf_tgt_poll_group_000", 00:14:42.648 "listen_address": { 00:14:42.648 "trtype": "TCP", 00:14:42.648 "adrfam": "IPv4", 00:14:42.648 "traddr": "10.0.0.2", 00:14:42.648 "trsvcid": "4420" 00:14:42.648 }, 00:14:42.648 "peer_address": { 00:14:42.648 "trtype": "TCP", 00:14:42.648 "adrfam": "IPv4", 00:14:42.648 "traddr": "10.0.0.1", 00:14:42.648 "trsvcid": "42622" 00:14:42.648 }, 00:14:42.648 "auth": { 00:14:42.648 "state": "completed", 00:14:42.648 "digest": "sha256", 00:14:42.648 "dhgroup": "ffdhe2048" 00:14:42.648 } 00:14:42.648 } 00:14:42.648 ]' 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.648 17:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.905 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:14:43.835 17:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.836 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:43.836 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.836 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.836 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.836 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.836 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:43.836 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.093 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.350 00:14:44.350 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.350 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.350 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.607 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.607 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.607 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.607 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.607 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.607 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.607 { 00:14:44.607 "cntlid": 11, 00:14:44.607 "qid": 0, 00:14:44.607 "state": "enabled", 00:14:44.607 "thread": "nvmf_tgt_poll_group_000", 00:14:44.607 "listen_address": { 
00:14:44.607 "trtype": "TCP", 00:14:44.607 "adrfam": "IPv4", 00:14:44.607 "traddr": "10.0.0.2", 00:14:44.607 "trsvcid": "4420" 00:14:44.607 }, 00:14:44.607 "peer_address": { 00:14:44.607 "trtype": "TCP", 00:14:44.607 "adrfam": "IPv4", 00:14:44.607 "traddr": "10.0.0.1", 00:14:44.607 "trsvcid": "42664" 00:14:44.607 }, 00:14:44.607 "auth": { 00:14:44.607 "state": "completed", 00:14:44.607 "digest": "sha256", 00:14:44.607 "dhgroup": "ffdhe2048" 00:14:44.607 } 00:14:44.607 } 00:14:44.607 ]' 00:14:44.607 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.864 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.864 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.864 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:44.864 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.864 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.864 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.864 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.122 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:46.053 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.311 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.586 00:14:46.586 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.586 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.586 17:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.844 { 00:14:46.844 "cntlid": 13, 00:14:46.844 "qid": 0, 00:14:46.844 "state": "enabled", 00:14:46.844 "thread": "nvmf_tgt_poll_group_000", 00:14:46.844 "listen_address": { 00:14:46.844 "trtype": "TCP", 00:14:46.844 "adrfam": "IPv4", 00:14:46.844 "traddr": "10.0.0.2", 00:14:46.844 "trsvcid": "4420" 00:14:46.844 }, 00:14:46.844 "peer_address": { 00:14:46.844 "trtype": "TCP", 00:14:46.844 "adrfam": "IPv4", 00:14:46.844 "traddr": "10.0.0.1", 00:14:46.844 "trsvcid": "42680" 00:14:46.844 }, 00:14:46.844 "auth": { 00:14:46.844 
"state": "completed", 00:14:46.844 "digest": "sha256", 00:14:46.844 "dhgroup": "ffdhe2048" 00:14:46.844 } 00:14:46.844 } 00:14:46.844 ]' 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.844 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.101 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.101 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.101 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.101 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.101 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.358 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:48.292 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.550 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.808 00:14:48.808 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.808 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.808 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.065 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.065 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.065 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.065 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.065 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.065 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.065 { 00:14:49.065 "cntlid": 15, 00:14:49.065 "qid": 0, 00:14:49.065 "state": "enabled", 00:14:49.065 "thread": "nvmf_tgt_poll_group_000", 00:14:49.065 "listen_address": { 00:14:49.065 "trtype": "TCP", 00:14:49.065 "adrfam": "IPv4", 00:14:49.065 "traddr": "10.0.0.2", 00:14:49.065 "trsvcid": "4420" 00:14:49.065 }, 00:14:49.066 "peer_address": { 00:14:49.066 "trtype": "TCP", 00:14:49.066 "adrfam": "IPv4", 00:14:49.066 "traddr": "10.0.0.1", 00:14:49.066 "trsvcid": "42718" 00:14:49.066 }, 00:14:49.066 "auth": { 00:14:49.066 "state": "completed", 00:14:49.066 "digest": "sha256", 00:14:49.066 "dhgroup": "ffdhe2048" 00:14:49.066 } 00:14:49.066 } 00:14:49.066 ]' 00:14:49.066 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.066 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.066 17:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.066 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:49.066 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.324 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.324 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.324 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.582 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:50.515 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.773 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.031 00:14:51.031 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.031 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.031 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.289 { 00:14:51.289 "cntlid": 17, 00:14:51.289 "qid": 0, 00:14:51.289 "state": "enabled", 00:14:51.289 "thread": "nvmf_tgt_poll_group_000", 00:14:51.289 "listen_address": { 00:14:51.289 "trtype": "TCP", 00:14:51.289 "adrfam": "IPv4", 00:14:51.289 "traddr": "10.0.0.2", 00:14:51.289 "trsvcid": "4420" 00:14:51.289 }, 00:14:51.289 "peer_address": { 00:14:51.289 "trtype": "TCP", 00:14:51.289 "adrfam": "IPv4", 00:14:51.289 "traddr": "10.0.0.1", 00:14:51.289 "trsvcid": "49594" 00:14:51.289 }, 00:14:51.289 "auth": { 00:14:51.289 "state": "completed", 00:14:51.289 "digest": "sha256", 00:14:51.289 "dhgroup": "ffdhe3072" 00:14:51.289 } 00:14:51.289 } 00:14:51.289 ]' 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:51.289 17:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.289 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.547 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:14:52.481 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.737 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:52.737 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.737 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.737 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.737 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.737 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.737 17:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.994 17:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.994 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.995 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.251 00:14:53.251 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.251 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.251 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.508 { 00:14:53.508 "cntlid": 19, 00:14:53.508 "qid": 0, 00:14:53.508 "state": "enabled", 00:14:53.508 "thread": "nvmf_tgt_poll_group_000", 00:14:53.508 "listen_address": { 00:14:53.508 "trtype": "TCP", 00:14:53.508 "adrfam": "IPv4", 00:14:53.508 "traddr": "10.0.0.2", 00:14:53.508 "trsvcid": "4420" 00:14:53.508 }, 00:14:53.508 "peer_address": { 00:14:53.508 "trtype": "TCP", 00:14:53.508 "adrfam": "IPv4", 00:14:53.508 "traddr": "10.0.0.1", 00:14:53.508 "trsvcid": "49610" 00:14:53.508 }, 00:14:53.508 "auth": { 00:14:53.508 "state": "completed", 00:14:53.508 "digest": "sha256", 00:14:53.508 "dhgroup": "ffdhe3072" 00:14:53.508 } 00:14:53.508 } 00:14:53.508 ]' 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.508 17:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.508 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.766 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:14:54.699 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.699 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.699 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.699 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.956 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.956 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.956 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.956 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.956 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.522 00:14:55.522 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.522 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.522 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.779 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.780 { 00:14:55.780 "cntlid": 21, 00:14:55.780 "qid": 0, 00:14:55.780 "state": "enabled", 00:14:55.780 "thread": "nvmf_tgt_poll_group_000", 00:14:55.780 "listen_address": { 00:14:55.780 "trtype": "TCP", 00:14:55.780 "adrfam": "IPv4", 00:14:55.780 "traddr": "10.0.0.2", 00:14:55.780 "trsvcid": "4420" 00:14:55.780 }, 00:14:55.780 "peer_address": { 00:14:55.780 "trtype": "TCP", 00:14:55.780 "adrfam": "IPv4", 00:14:55.780 "traddr": "10.0.0.1", 00:14:55.780 "trsvcid": "49640" 00:14:55.780 }, 00:14:55.780 "auth": { 00:14:55.780 "state": "completed", 00:14:55.780 "digest": "sha256", 00:14:55.780 "dhgroup": "ffdhe3072" 00:14:55.780 } 00:14:55.780 } 00:14:55.780 ]' 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.780 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.038 
17:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.035 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.296 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.555 00:14:57.555 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.555 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.555 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.813 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.813 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.813 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.813 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.813 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.813 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.813 { 00:14:57.813 "cntlid": 23, 00:14:57.813 "qid": 0, 00:14:57.813 "state": "enabled", 00:14:57.813 "thread": "nvmf_tgt_poll_group_000", 00:14:57.813 "listen_address": { 00:14:57.813 "trtype": "TCP", 00:14:57.813 "adrfam": "IPv4", 00:14:57.813 "traddr": "10.0.0.2", 00:14:57.813 "trsvcid": "4420" 00:14:57.813 }, 00:14:57.813 "peer_address": { 00:14:57.813 "trtype": "TCP", 00:14:57.813 "adrfam": "IPv4", 00:14:57.813 "traddr": "10.0.0.1", 00:14:57.813 "trsvcid": "49672" 00:14:57.813 }, 00:14:57.813 "auth": { 00:14:57.813 "state": "completed", 00:14:57.813 "digest": "sha256", 00:14:57.813 "dhgroup": "ffdhe3072" 00:14:57.813 } 00:14:57.813 } 00:14:57.813 ]' 00:14:57.813 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.071 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.071 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.071 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.071 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.071 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.071 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.071 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.329 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:14:59.262 17:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.262 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.520 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.778 00:14:59.778 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.778 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.778 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.035 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.035 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.035 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.035 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.293 { 00:15:00.293 "cntlid": 25, 00:15:00.293 "qid": 0, 00:15:00.293 "state": "enabled", 00:15:00.293 "thread": "nvmf_tgt_poll_group_000", 00:15:00.293 "listen_address": { 00:15:00.293 "trtype": "TCP", 00:15:00.293 "adrfam": "IPv4", 00:15:00.293 "traddr": "10.0.0.2", 00:15:00.293 "trsvcid": "4420" 00:15:00.293 }, 00:15:00.293 "peer_address": { 00:15:00.293 "trtype": "TCP", 00:15:00.293 "adrfam": "IPv4", 00:15:00.293 "traddr": "10.0.0.1", 00:15:00.293 "trsvcid": "35190" 00:15:00.293 }, 00:15:00.293 "auth": { 00:15:00.293 "state": "completed", 00:15:00.293 "digest": "sha256", 00:15:00.293 "dhgroup": "ffdhe4096" 00:15:00.293 } 00:15:00.293 } 00:15:00.293 ]' 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.293 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.550 17:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.483 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.741 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.998 00:15:02.257 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.257 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.257 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.515 { 00:15:02.515 "cntlid": 27, 00:15:02.515 "qid": 0, 00:15:02.515 "state": "enabled", 00:15:02.515 "thread": "nvmf_tgt_poll_group_000", 00:15:02.515 "listen_address": { 00:15:02.515 "trtype": "TCP", 00:15:02.515 "adrfam": "IPv4", 00:15:02.515 "traddr": "10.0.0.2", 00:15:02.515 "trsvcid": "4420" 00:15:02.515 }, 00:15:02.515 "peer_address": { 00:15:02.515 "trtype": "TCP", 00:15:02.515 "adrfam": "IPv4", 00:15:02.515 "traddr": "10.0.0.1", 00:15:02.515 "trsvcid": "35212" 00:15:02.515 }, 00:15:02.515 "auth": { 00:15:02.515 "state": "completed", 00:15:02.515 "digest": "sha256", 00:15:02.515 "dhgroup": "ffdhe4096" 00:15:02.515 } 00:15:02.515 } 00:15:02.515 ]' 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.515 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.772 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.704 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.962 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.529 00:15:04.529 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.529 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.529 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.529 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.786 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.786 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.786 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.786 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.786 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.786 { 00:15:04.786 "cntlid": 29, 00:15:04.786 "qid": 0, 00:15:04.786 "state": "enabled", 00:15:04.786 "thread": "nvmf_tgt_poll_group_000", 00:15:04.786 "listen_address": { 00:15:04.786 "trtype": "TCP", 00:15:04.786 "adrfam": "IPv4", 00:15:04.786 "traddr": "10.0.0.2", 00:15:04.786 "trsvcid": "4420" 00:15:04.786 }, 00:15:04.786 "peer_address": { 00:15:04.786 "trtype": "TCP", 00:15:04.787 "adrfam": "IPv4", 00:15:04.787 "traddr": "10.0.0.1", 00:15:04.787 "trsvcid": "35226" 00:15:04.787 }, 00:15:04.787 "auth": { 00:15:04.787 "state": "completed", 00:15:04.787 "digest": "sha256", 00:15:04.787 "dhgroup": "ffdhe4096" 00:15:04.787 } 00:15:04.787 } 00:15:04.787 ]' 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.787 17:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.045 17:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:15:05.978 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.978 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.978 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.978 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.978 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.978 17:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.978 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.978 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.235 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.492 00:15:06.750 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.750 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.750 17:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.008 { 00:15:07.008 "cntlid": 31, 00:15:07.008 "qid": 0, 00:15:07.008 "state": "enabled", 00:15:07.008 "thread": "nvmf_tgt_poll_group_000", 00:15:07.008 "listen_address": { 00:15:07.008 "trtype": "TCP", 00:15:07.008 "adrfam": "IPv4", 00:15:07.008 "traddr": "10.0.0.2", 00:15:07.008 "trsvcid": "4420" 00:15:07.008 }, 00:15:07.008 "peer_address": { 00:15:07.008 "trtype": "TCP", 00:15:07.008 "adrfam": "IPv4", 00:15:07.008 "traddr": "10.0.0.1", 00:15:07.008 "trsvcid": "35240" 00:15:07.008 }, 00:15:07.008 "auth": { 00:15:07.008 "state": "completed", 00:15:07.008 "digest": "sha256", 00:15:07.008 "dhgroup": "ffdhe4096" 00:15:07.008 } 00:15:07.008 } 00:15:07.008 ]' 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.008 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.268 17:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.201 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:08.459 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:09.024
00:15:09.024 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:09.024 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:09.024 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:09.282 {
00:15:09.282 "cntlid": 33,
00:15:09.282 "qid": 0,
00:15:09.282 "state": "enabled",
00:15:09.282 "thread": "nvmf_tgt_poll_group_000",
00:15:09.282 "listen_address": {
00:15:09.282 "trtype": "TCP",
00:15:09.282 "adrfam": "IPv4",
00:15:09.282 "traddr": "10.0.0.2",
00:15:09.282 "trsvcid": "4420"
00:15:09.282 },
00:15:09.282 "peer_address": {
00:15:09.282 "trtype": "TCP",
00:15:09.282 "adrfam": "IPv4",
00:15:09.282 "traddr": "10.0.0.1",
00:15:09.282 "trsvcid": "35274"
00:15:09.282 },
00:15:09.282 "auth": {
00:15:09.282 "state": "completed",
00:15:09.282 "digest": "sha256",
00:15:09.282 "dhgroup": "ffdhe6144"
00:15:09.282 }
00:15:09.282 }
00:15:09.282 ]'
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:09.282 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:09.540 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:09.540 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:09.540 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:09.798 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:10.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:10.731 17:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
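The cycle above is one pass of connect_authenticate: the host-side SPDK app is pinned to a single digest and DH group, the host NQN and its DH-HMAC-CHAP keys are registered on the target subsystem, and bdev_nvme_attach_controller performs the authenticated connect. A minimal sketch of that sequence, reconstructed from the trace (the key0/ckey0 keyring names and the NQNs are taken from the log above; this is not the literal target/auth.sh source):

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    # Pin the host (initiator) SPDK app to one digest and one DH group.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # Register the host and its key pair on the target subsystem.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach from the host side; DH-HMAC-CHAP runs during this connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Later sketches below reuse $rpc and $hostnqn as defined here.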
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:10.989 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:11.555
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:11.555 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:11.555 {
00:15:11.555 "cntlid": 35,
00:15:11.555 "qid": 0,
00:15:11.555 "state": "enabled",
00:15:11.555 "thread": "nvmf_tgt_poll_group_000",
00:15:11.555 "listen_address": {
00:15:11.555 "trtype": "TCP",
00:15:11.555 "adrfam": "IPv4",
00:15:11.555 "traddr": "10.0.0.2",
00:15:11.555 "trsvcid": "4420"
00:15:11.555 },
00:15:11.555 "peer_address": {
00:15:11.555 "trtype": "TCP",
00:15:11.555 "adrfam": "IPv4",
00:15:11.555 "traddr": "10.0.0.1",
00:15:11.555 "trsvcid": "60420"
},
00:15:11.555 "auth": {
00:15:11.555 "state": "completed",
00:15:11.555 "digest": "sha256",
00:15:11.555 "dhgroup": "ffdhe6144"
00:15:11.555 }
00:15:11.555 }
00:15:11.555 ]'
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:11.813 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:12.071 17:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==:
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:13.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:13.003 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
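After each attach, the script asserts what was actually negotiated by reading the qpair back from the target, as the auth.sh@44-48 checks above show. A sketch of the same assertions, reusing $rpc from the first sketch:

    # The .auth block of the reported qpair records the negotiated
    # digest, DH group, and the final state of the DH-HMAC-CHAP exchange.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]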
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:13.260 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:13.827
00:15:13.827 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:13.827 17:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:14.084 {
00:15:14.084 "cntlid": 37,
00:15:14.084 "qid": 0,
00:15:14.084 "state": "enabled",
00:15:14.084 "thread": "nvmf_tgt_poll_group_000",
00:15:14.084 "listen_address": {
00:15:14.084 "trtype": "TCP",
00:15:14.084 "adrfam": "IPv4",
00:15:14.084 "traddr": "10.0.0.2",
00:15:14.084 "trsvcid": "4420"
00:15:14.084 },
00:15:14.084 "peer_address": {
00:15:14.084 "trtype": "TCP",
00:15:14.084 "adrfam": "IPv4",
00:15:14.084 "traddr": "10.0.0.1",
00:15:14.084 "trsvcid": "60440"
00:15:14.084 },
00:15:14.084 "auth": {
00:15:14.084 "state": "completed",
00:15:14.084 "digest": "sha256",
00:15:14.084 "dhgroup": "ffdhe6144"
00:15:14.084 }
00:15:14.084 }
00:15:14.084 ]'
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:14.084 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:14.343 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:14.343 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:14.343 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:14.632 17:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:15.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:15.562 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
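Each pass also authenticates through the kernel initiator with nvme connect, passing the secrets inline rather than by keyring name, as the @52 lines above show. The DHHC-1:NN:...: strings are the TP 8006 secret representation; as far as I can tell, NN records the hash used to transform the secret (00 = unhashed, 01/02/03 = SHA-256/384/512), which matches the varying secret lengths in this log. A sketch with the secrets elided (take the full values from the trace; $hostnqn as in the first sketch):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret 'DHHC-1:02:YWIxMDNm...' \
        --dhchap-ctrl-secret 'DHHC-1:01:OTliNDI0...'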
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:15.818 17:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:16.381
00:15:16.381 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:16.382 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:16.382 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:16.639 {
00:15:16.639 "cntlid": 39,
00:15:16.639 "qid": 0,
00:15:16.639 "state": "enabled",
00:15:16.639 "thread": "nvmf_tgt_poll_group_000",
00:15:16.639 "listen_address": {
00:15:16.639 "trtype": "TCP",
00:15:16.639 "adrfam": "IPv4",
00:15:16.639 "traddr": "10.0.0.2",
00:15:16.639 "trsvcid": "4420"
00:15:16.639 },
00:15:16.639 "peer_address": {
00:15:16.639 "trtype": "TCP",
00:15:16.639 "adrfam": "IPv4",
00:15:16.639 "traddr": "10.0.0.1",
00:15:16.639 "trsvcid": "36656"
00:15:16.639 },
00:15:16.639 "auth": {
00:15:16.639 "state": "completed",
00:15:16.639 "digest": "sha256",
00:15:16.639 "dhgroup": "ffdhe6144"
00:15:16.639 }
00:15:16.639 }
00:15:16.639 ]'
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:16.639 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:16.896 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=:
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:18.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
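The key3 pass just completed registers only --dhchap-key, with no --dhchap-ctrlr-key, and its nvme connect carries a single --dhchap-secret: the ${ckeys[$3]:+...} expansion at auth.sh@37 makes the controller key optional, so that pass exercises unidirectional authentication (the host proves itself; the controller is not challenged back). Sketched, with $rpc and $hostnqn from the first sketch:

    # No ctrlr key: host-to-controller authentication only.
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key3
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3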
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:18.266 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:19.198
00:15:19.198 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:19.198 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:19.198 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:19.456 {
00:15:19.456 "cntlid": 41,
00:15:19.456 "qid": 0,
00:15:19.456 "state": "enabled",
00:15:19.456 "thread": "nvmf_tgt_poll_group_000",
00:15:19.456 "listen_address": {
00:15:19.456 "trtype": "TCP",
00:15:19.456 "adrfam": "IPv4",
00:15:19.456 "traddr": "10.0.0.2",
00:15:19.456 "trsvcid": "4420"
00:15:19.456 },
00:15:19.456 "peer_address": {
00:15:19.456 "trtype": "TCP",
00:15:19.456 "adrfam": "IPv4",
00:15:19.456 "traddr": "10.0.0.1",
00:15:19.456 "trsvcid": "60478"
00:15:19.456 },
00:15:19.456 "auth": {
00:15:19.456 "state": "completed",
00:15:19.456 "digest": "sha256",
00:15:19.456 "dhgroup": "ffdhe8192"
00:15:19.456 }
00:15:19.456 }
00:15:19.456 ]'
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:19.456 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:19.713 17:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:20.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:20.645 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:20.902 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.160 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:21.160 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
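The DHHC-1 secrets replayed throughout this excerpt were generated before it begins. For orientation only, recent nvme-cli can mint such secrets; this is an assumption about nvme-cli 2.x, and flag spelling may differ by version:

    # 32-byte secret, SHA-256-transformed (--hmac 1), bound to the host NQN.
    nvme gen-dhchap-key --key-length 32 --hmac 1 --nqn "$hostnqn"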
00:15:21.160 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:22.093
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:22.093 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:22.093 {
00:15:22.093 "cntlid": 43,
00:15:22.093 "qid": 0,
00:15:22.093 "state": "enabled",
00:15:22.093 "thread": "nvmf_tgt_poll_group_000",
00:15:22.093 "listen_address": {
00:15:22.093 "trtype": "TCP",
00:15:22.093 "adrfam": "IPv4",
00:15:22.093 "traddr": "10.0.0.2",
00:15:22.093 "trsvcid": "4420"
00:15:22.093 },
00:15:22.093 "peer_address": {
00:15:22.093 "trtype": "TCP",
00:15:22.093 "adrfam": "IPv4",
00:15:22.093 "traddr": "10.0.0.1",
00:15:22.093 "trsvcid": "36618"
00:15:22.093 },
00:15:22.093 "auth": {
00:15:22.093 "state": "completed",
00:15:22.093 "digest": "sha256",
00:15:22.093 "dhgroup": "ffdhe8192"
00:15:22.093 }
00:15:22.093 }
00:15:22.093 ]'
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:22.351 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:22.609 17:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==:
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:23.543 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
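The @92-96 markers reappearing above trace the driving loops of the test: every digest is paired with every DH group and every key index. A reconstructed shape (the array names come from the ${digests[@]}, ${dhgroups[@]} and ${!keys[@]} expansions in the trace; this is not the literal script):

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # Re-pin host options, then run one authenticated connect cycle.
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done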
00:15:23.801 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 (cont.) bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.734
00:15:24.734 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:24.734 17:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:24.992 {
00:15:24.992 "cntlid": 45,
00:15:24.992 "qid": 0,
00:15:24.992 "state": "enabled",
00:15:24.992 "thread": "nvmf_tgt_poll_group_000",
00:15:24.992 "listen_address": {
00:15:24.992 "trtype": "TCP",
00:15:24.992 "adrfam": "IPv4",
00:15:24.992 "traddr": "10.0.0.2",
00:15:24.992 "trsvcid": "4420"
00:15:24.992 },
00:15:24.992 "peer_address": {
00:15:24.992 "trtype": "TCP",
00:15:24.992 "adrfam": "IPv4",
00:15:24.992 "traddr": "10.0.0.1",
00:15:24.992 "trsvcid": "36642"
00:15:24.992 },
00:15:24.992 "auth": {
00:15:24.992 "state": "completed",
00:15:24.992 "digest": "sha256",
00:15:24.992 "dhgroup": "ffdhe8192"
00:15:24.992 }
00:15:24.992 }
00:15:24.992 ]'
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:24.992 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:25.250 17:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:26.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:26.623 17:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:27.557
00:15:27.557 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
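Note which rpc.py calls above carry -s /var/tmp/host.sock: the run drives two SPDK applications over distinct RPC sockets, the NVMe-oF target on the default socket and a second app acting as the host/initiator, which the hostrpc helper wraps. Side by side (reusing $rpc from the first sketch):

    "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # target-side RPC
    "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers       # host-side RPC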
00:15:27.557 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:27.557 17:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:27.814 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:27.814 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:27.814 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:27.814 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.814 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:27.814 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:27.814 {
00:15:27.814 "cntlid": 47,
00:15:27.814 "qid": 0,
00:15:27.814 "state": "enabled",
00:15:27.814 "thread": "nvmf_tgt_poll_group_000",
00:15:27.814 "listen_address": {
00:15:27.814 "trtype": "TCP",
00:15:27.814 "adrfam": "IPv4",
00:15:27.814 "traddr": "10.0.0.2",
00:15:27.814 "trsvcid": "4420"
00:15:27.814 },
00:15:27.814 "peer_address": {
00:15:27.814 "trtype": "TCP",
00:15:27.814 "adrfam": "IPv4",
00:15:27.814 "traddr": "10.0.0.1",
00:15:27.815 "trsvcid": "36656"
00:15:27.815 },
00:15:27.815 "auth": {
00:15:27.815 "state": "completed",
00:15:27.815 "digest": "sha256",
00:15:27.815 "dhgroup": "ffdhe8192"
00:15:27.815 }
00:15:27.815 }
00:15:27.815 ]'
00:15:27.815 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:27.815 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:28.072 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:28.072 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:28.072 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:28.072 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:28.072 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:28.072 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:28.397 17:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=:
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:29.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:29.329 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:29.587 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:29.845
00:15:29.845 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:29.845 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
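At this point the outermost loop advances: the trace switches from sha256 to sha384 and to dhgroup "null", i.e. DH-HMAC-CHAP without the ephemeral Diffie-Hellman exchange, where the challenge is answered directly with the shared key. The corresponding host-side pinning, as exercised above:

    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups null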
00:15:29.845 17:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:30.103 {
00:15:30.103 "cntlid": 49,
00:15:30.103 "qid": 0,
00:15:30.103 "state": "enabled",
00:15:30.103 "thread": "nvmf_tgt_poll_group_000",
00:15:30.103 "listen_address": {
00:15:30.103 "trtype": "TCP",
00:15:30.103 "adrfam": "IPv4",
00:15:30.103 "traddr": "10.0.0.2",
00:15:30.103 "trsvcid": "4420"
00:15:30.103 },
00:15:30.103 "peer_address": {
00:15:30.103 "trtype": "TCP",
00:15:30.103 "adrfam": "IPv4",
00:15:30.103 "traddr": "10.0.0.1",
00:15:30.103 "trsvcid": "38434"
00:15:30.103 },
00:15:30.103 "auth": {
00:15:30.103 "state": "completed",
00:15:30.103 "digest": "sha384",
00:15:30.103 "dhgroup": "null"
00:15:30.103 }
00:15:30.103 }
00:15:30.103 ]'
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:30.103 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:30.364 17:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:15:31.297 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:31.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:31.297 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:31.297 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:31.297 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.554 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:31.555 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:31.555 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:31.555 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:31.813 17:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:32.071
00:15:32.071 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:32.071 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:32.071 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
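Between iterations the trace tears everything down in a fixed order, per the @49-56 markers above: the host-side bdev controller first, then the kernel connection, then the host entry on the subsystem, so the next pass starts from a clean state. Condensed from the log (reusing $rpc and $hostnqn from the first sketch):

    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"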
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.329 { 00:15:32.329 "cntlid": 51, 00:15:32.329 "qid": 0, 00:15:32.329 "state": "enabled", 00:15:32.329 "thread": "nvmf_tgt_poll_group_000", 00:15:32.329 "listen_address": { 00:15:32.329 "trtype": "TCP", 00:15:32.329 "adrfam": "IPv4", 00:15:32.329 "traddr": "10.0.0.2", 00:15:32.329 "trsvcid": "4420" 00:15:32.329 }, 00:15:32.329 "peer_address": { 00:15:32.329 "trtype": "TCP", 00:15:32.329 "adrfam": "IPv4", 00:15:32.329 "traddr": "10.0.0.1", 00:15:32.329 "trsvcid": "38448" 00:15:32.329 }, 00:15:32.329 "auth": { 00:15:32.329 "state": "completed", 00:15:32.329 "digest": "sha384", 00:15:32.329 "dhgroup": "null" 00:15:32.329 } 00:15:32.329 } 00:15:32.329 ]' 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.329 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.587 17:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
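The pass above is one complete DH-HMAC-CHAP round trip for key1 with digest sha384 and dhgroup null. Reduced to its RPC sequence, each pass has the shape sketched below; in the script, rpc_cmd talks to the target application's default RPC socket while hostrpc drives a second SPDK application over /var/tmp/host.sock. This is a reconstruction from the xtrace, not the script itself, and it assumes keys key0..key3 and ckey0..ckey3 are already loaded in both applications:

  # One connect_authenticate pass, as reconstructed from the xtrace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: restrict negotiation to a single digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null

  # Target side: authorize the host NQN with its DH-HMAC-CHAP key pair.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attaching the controller forces the authentication handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1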
00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:33.545 17:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:33.803 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:34.060
00:15:34.319 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:34.319 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:34.319 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:34.577 {
00:15:34.577 "cntlid": 53,
00:15:34.577 "qid": 0,
00:15:34.577 "state": "enabled",
00:15:34.577 "thread": "nvmf_tgt_poll_group_000",
00:15:34.577 "listen_address": {
00:15:34.577 "trtype": "TCP",
00:15:34.577 "adrfam": "IPv4",
00:15:34.577 "traddr": "10.0.0.2",
00:15:34.577 "trsvcid": "4420"
00:15:34.577 },
00:15:34.577 "peer_address": {
00:15:34.577 "trtype": "TCP",
00:15:34.577 "adrfam": "IPv4",
00:15:34.577 "traddr": "10.0.0.1",
00:15:34.577 "trsvcid": "38476"
00:15:34.577 },
00:15:34.577 "auth": {
00:15:34.577 "state": "completed",
00:15:34.577 "digest": "sha384",
00:15:34.577 "dhgroup": "null"
00:15:34.577 }
00:15:34.577 }
00:15:34.577 ]'
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:34.577 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:34.835 17:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:35.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
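Besides the SPDK-to-SPDK attach, every pass repeats the handshake with the kernel initiator, feeding nvme-cli the same secrets in the DHHC-1:<id>:<base64>: wire format; passing --dhchap-ctrl-secret as well requests bidirectional authentication. A sketch of that step, with the literal secrets from the log replaced by shell variables (the variable names are illustrative):

  # Host-side check with nvme-cli, assuming a DH-HMAC-CHAP capable
  # nvme-cli and kernel. $hkey/$ckey stand in for the DHHC-1:...: strings
  # printed in the log; $hostnqn/$hostid are the host NQN and host ID above.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$hkey" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0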
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:35.768 17:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:36.026 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:36.592
00:15:36.592 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:36.592 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:36.592 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:36.592 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:36.592 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:36.592 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:36.592 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:36.849 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:36.849 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:36.849 {
00:15:36.849 "cntlid": 55,
00:15:36.849 "qid": 0,
00:15:36.849 "state": "enabled",
00:15:36.849 "thread": "nvmf_tgt_poll_group_000",
00:15:36.849 "listen_address": {
00:15:36.849 "trtype": "TCP",
00:15:36.849 "adrfam": "IPv4",
00:15:36.849 "traddr": "10.0.0.2",
00:15:36.849 "trsvcid": "4420"
00:15:36.849 },
00:15:36.849 "peer_address": {
00:15:36.849 "trtype": "TCP",
00:15:36.849 "adrfam": "IPv4",
00:15:36.849 "traddr": "10.0.0.1",
00:15:36.849 "trsvcid": "38498"
00:15:36.849 },
00:15:36.849 "auth": {
00:15:36.849 "state": "completed",
00:15:36.849 "digest": "sha384",
00:15:36.849 "dhgroup": "null"
00:15:36.849 }
00:15:36.849 }
00:15:36.849 ]'
00:15:36.849 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:36.849 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:36.850 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:36.850 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:15:36.850 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:36.850 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:36.850 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:36.850 17:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:37.107 17:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=:
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:38.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
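With key1 through key3 verified against dhgroup null (plain CHAP, no Diffie-Hellman exchange), the outer loop now advances to the FFDHE groups. The driving loops, reduced from the target/auth.sh@92-96 xtrace lines to their shape; the full contents of the dhgroups and keys arrays are not visible in this excerpt, so the comments below are an assumption:

  # Outer driver loops as they appear at target/auth.sh@92-96.
  for dhgroup in "${dhgroups[@]}"; do    # seen here: null, ffdhe2048, ffdhe3072
      for keyid in "${!keys[@]}"; do     # seen here: 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done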
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:38.038 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:38.296 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:38.861
00:15:38.861 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:38.861 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:38.861 17:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:39.119 {
00:15:39.119 "cntlid": 57,
00:15:39.119 "qid": 0,
00:15:39.119 "state": "enabled",
00:15:39.119 "thread": "nvmf_tgt_poll_group_000",
00:15:39.119 "listen_address": {
00:15:39.119 "trtype": "TCP",
00:15:39.119 "adrfam": "IPv4",
00:15:39.119 "traddr": "10.0.0.2",
00:15:39.119 "trsvcid": "4420"
00:15:39.119 },
00:15:39.119 "peer_address": {
00:15:39.119 "trtype": "TCP",
00:15:39.119 "adrfam": "IPv4",
00:15:39.119 "traddr": "10.0.0.1",
00:15:39.119 "trsvcid": "38522"
00:15:39.119 },
00:15:39.119 "auth": {
00:15:39.119 "state": "completed",
00:15:39.119 "digest": "sha384",
00:15:39.119 "dhgroup": "ffdhe2048"
00:15:39.119 }
00:15:39.119 }
00:15:39.119 ]'
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:39.119 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:39.376 17:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:40.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:40.320 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:40.577 17:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:41.143
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:41.143 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:41.143 {
00:15:41.143 "cntlid": 59,
00:15:41.143 "qid": 0,
00:15:41.143 "state": "enabled",
00:15:41.143 "thread": "nvmf_tgt_poll_group_000",
00:15:41.143 "listen_address": {
00:15:41.143 "trtype": "TCP",
00:15:41.143 "adrfam": "IPv4",
00:15:41.143 "traddr": "10.0.0.2",
00:15:41.143 "trsvcid": "4420"
00:15:41.143 },
00:15:41.143 "peer_address": {
00:15:41.143 "trtype": "TCP",
00:15:41.143 "adrfam": "IPv4",
00:15:41.143 "traddr": "10.0.0.1",
00:15:41.143 "trsvcid": "40328"
00:15:41.143 },
00:15:41.143 "auth": {
00:15:41.143 "state": "completed",
00:15:41.143 "digest": "sha384",
00:15:41.143 "dhgroup": "ffdhe2048"
00:15:41.143 }
00:15:41.143 }
00:15:41.143 ]'
00:15:41.402 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:41.402 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:41.402 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:41.402 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:41.402 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:41.402 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:41.402 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:41.660 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:41.660 17:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==:
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:42.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
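Each pass is judged by the target's own view of the connection: nvmf_subsystem_get_qpairs reports an auth object per qpair, and the script asserts on its fields. An equivalent standalone check, assuming the same subsystem NQN; the here-string form below replaces the script's command substitutions for brevity:

  # Verify that the qpair authenticated with the expected parameters.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]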
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:42.592 17:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:42.850 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:43.417
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:43.417 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:43.417 {
00:15:43.417 "cntlid": 61,
00:15:43.417 "qid": 0,
00:15:43.417 "state": "enabled",
00:15:43.417 "thread": "nvmf_tgt_poll_group_000",
00:15:43.417 "listen_address": {
00:15:43.417 "trtype": "TCP",
00:15:43.417 "adrfam": "IPv4",
00:15:43.417 "traddr": "10.0.0.2",
00:15:43.417 "trsvcid": "4420"
00:15:43.417 },
00:15:43.417 "peer_address": {
00:15:43.417 "trtype": "TCP",
00:15:43.417 "adrfam": "IPv4",
00:15:43.417 "traddr": "10.0.0.1",
00:15:43.417 "trsvcid": "40358"
00:15:43.417 },
00:15:43.417 "auth": {
00:15:43.417 "state": "completed",
00:15:43.417 "digest": "sha384",
00:15:43.417 "dhgroup": "ffdhe2048"
00:15:43.417 }
00:15:43.417 }
00:15:43.417 ]'
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:43.675 17:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:43.933 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:44.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:44.866 17:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:45.125 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:45.382
00:15:45.382 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:45.382 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:45.382 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:45.641 {
00:15:45.641 "cntlid": 63,
00:15:45.641 "qid": 0,
00:15:45.641 "state": "enabled",
00:15:45.641 "thread": "nvmf_tgt_poll_group_000",
00:15:45.641 "listen_address": {
00:15:45.641 "trtype": "TCP",
00:15:45.641 "adrfam": "IPv4",
00:15:45.641 "traddr": "10.0.0.2",
00:15:45.641 "trsvcid": "4420"
00:15:45.641 },
00:15:45.641 "peer_address": {
00:15:45.641 "trtype": "TCP",
00:15:45.641 "adrfam": "IPv4",
00:15:45.641 "traddr": "10.0.0.1",
00:15:45.641 "trsvcid": "40384"
00:15:45.641 },
00:15:45.641 "auth": {
00:15:45.641 "state": "completed",
00:15:45.641 "digest": "sha384",
00:15:45.641 "dhgroup": "ffdhe2048"
00:15:45.641 }
00:15:45.641 }
00:15:45.641 ]'
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:45.641 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:45.899 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:45.899 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:45.899 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:45.899 17:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:46.156 17:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=:
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:47.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
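Note that the key3 passes above carry no --dhchap-ctrlr-key: the ckey expansion at target/auth.sh@37 only appends the flag when a controller key is defined for that index, so key3 exercises unidirectional authentication while the other keys are bidirectional. The construct, with $subnqn and $hostnqn standing in for the literal NQNs in the log:

  # ${ckeys[$3]:+...} expands to nothing when ckeys[$3] is unset or empty,
  # so add_host gets the extra flag only when a controller key exists.
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$3" "${ckey[@]}"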
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:47.090 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:47.346 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:47.602
00:15:47.602 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:47.602 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:47.602 17:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:47.859 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:47.859 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:47.859 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:47.859 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.859 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:47.859 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:47.859 {
00:15:47.860 "cntlid": 65,
00:15:47.860 "qid": 0,
00:15:47.860 "state": "enabled",
00:15:47.860 "thread": "nvmf_tgt_poll_group_000",
00:15:47.860 "listen_address": {
00:15:47.860 "trtype": "TCP",
00:15:47.860 "adrfam": "IPv4",
00:15:47.860 "traddr": "10.0.0.2",
00:15:47.860 "trsvcid": "4420"
00:15:47.860 },
00:15:47.860 "peer_address": {
00:15:47.860 "trtype": "TCP",
00:15:47.860 "adrfam": "IPv4",
00:15:47.860 "traddr": "10.0.0.1",
00:15:47.860 "trsvcid": "40414"
00:15:47.860 },
00:15:47.860 "auth": {
00:15:47.860 "state": "completed",
00:15:47.860 "digest": "sha384",
00:15:47.860 "dhgroup": "ffdhe3072"
00:15:47.860 }
00:15:47.860 }
00:15:47.860 ]'
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:48.117 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:48.374 17:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:49.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:49.306 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:49.564 17:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:50.129
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:50.129 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:50.129 {
00:15:50.129 "cntlid": 67,
00:15:50.129 "qid": 0,
00:15:50.129 "state": "enabled",
00:15:50.129 "thread": "nvmf_tgt_poll_group_000",
00:15:50.129 "listen_address": {
00:15:50.129 "trtype": "TCP",
00:15:50.129 "adrfam": "IPv4",
00:15:50.129 "traddr": "10.0.0.2",
00:15:50.129 "trsvcid": "4420"
00:15:50.129 },
00:15:50.129 "peer_address": {
00:15:50.129 "trtype": "TCP",
00:15:50.129 "adrfam": "IPv4",
00:15:50.129 "traddr": "10.0.0.1",
00:15:50.129 "trsvcid": "46248"
00:15:50.129 },
00:15:50.129 "auth": {
00:15:50.129 "state": "completed",
00:15:50.129 "digest": "sha384",
00:15:50.129 "dhgroup": "ffdhe3072"
00:15:50.129 }
00:15:50.129 }
00:15:50.129 ]'
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:50.392 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:50.691 17:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==:
00:15:51.625 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:51.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:51.625 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:51.625 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:51.625 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.625 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:51.625 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:51.625 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:51.626 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:51.883 17:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:52.141
00:15:52.141 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:52.141 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:52.141 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:52.398 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:52.398 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:52.398 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:52.398 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:52.398 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:52.398 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:52.398 {
00:15:52.398 "cntlid": 69,
00:15:52.398 "qid": 0,
00:15:52.398 "state": "enabled",
00:15:52.398 "thread": "nvmf_tgt_poll_group_000",
00:15:52.398 "listen_address": {
00:15:52.398 "trtype": "TCP",
00:15:52.398 "adrfam": "IPv4",
00:15:52.398 "traddr": "10.0.0.2",
00:15:52.398 "trsvcid": "4420"
00:15:52.398 },
00:15:52.398 "peer_address": {
00:15:52.398 "trtype": "TCP",
00:15:52.398 "adrfam": "IPv4",
00:15:52.398 "traddr": "10.0.0.1",
00:15:52.398 "trsvcid": "46272"
00:15:52.398 },
00:15:52.398 "auth": {
00:15:52.398 "state": "completed",
00:15:52.398 "digest": "sha384",
00:15:52.398 "dhgroup": "ffdhe3072"
00:15:52.398 }
00:15:52.398 }
00:15:52.398 ]'
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:52.655 17:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:52.911 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:53.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:53.843 17:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:54.100 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:54.359
00:15:54.359 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:54.359 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:54.359 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:54.617 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:54.617 17:58:40
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.617 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.617 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.617 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.617 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.617 { 00:15:54.617 "cntlid": 71, 00:15:54.617 "qid": 0, 00:15:54.617 "state": "enabled", 00:15:54.617 "thread": "nvmf_tgt_poll_group_000", 00:15:54.617 "listen_address": { 00:15:54.617 "trtype": "TCP", 00:15:54.617 "adrfam": "IPv4", 00:15:54.617 "traddr": "10.0.0.2", 00:15:54.617 "trsvcid": "4420" 00:15:54.617 }, 00:15:54.617 "peer_address": { 00:15:54.617 "trtype": "TCP", 00:15:54.617 "adrfam": "IPv4", 00:15:54.617 "traddr": "10.0.0.1", 00:15:54.617 "trsvcid": "46294" 00:15:54.617 }, 00:15:54.617 "auth": { 00:15:54.617 "state": "completed", 00:15:54.617 "digest": "sha384", 00:15:54.617 "dhgroup": "ffdhe3072" 00:15:54.617 } 00:15:54.617 } 00:15:54.617 ]' 00:15:54.617 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.875 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.875 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.875 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.875 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.875 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.875 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.875 17:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.133 17:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.066 17:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.066 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.324 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.582 00:15:56.582 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.582 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.582 17:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.840 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.840 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.840 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.840 17:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.840 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.840 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.840 { 00:15:56.840 "cntlid": 73, 00:15:56.840 "qid": 0, 00:15:56.840 "state": "enabled", 00:15:56.840 "thread": "nvmf_tgt_poll_group_000", 00:15:56.840 "listen_address": { 00:15:56.840 "trtype": "TCP", 00:15:56.840 "adrfam": "IPv4", 00:15:56.840 "traddr": "10.0.0.2", 00:15:56.840 "trsvcid": "4420" 00:15:56.840 }, 00:15:56.840 "peer_address": { 00:15:56.840 "trtype": "TCP", 00:15:56.840 "adrfam": "IPv4", 00:15:56.840 "traddr": "10.0.0.1", 00:15:56.840 "trsvcid": "46320" 00:15:56.840 }, 00:15:56.840 "auth": { 00:15:56.840 "state": "completed", 00:15:56.840 "digest": "sha384", 00:15:56.840 "dhgroup": "ffdhe4096" 00:15:56.840 } 00:15:56.840 } 00:15:56.840 ]' 00:15:56.840 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.096 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.096 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.096 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.096 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.096 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.096 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.096 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.354 17:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.286 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.544 17:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.801 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:59.059 { 00:15:59.059 "cntlid": 75, 00:15:59.059 "qid": 0, 00:15:59.059 "state": "enabled", 00:15:59.059 "thread": "nvmf_tgt_poll_group_000", 00:15:59.059 "listen_address": { 00:15:59.059 "trtype": "TCP", 00:15:59.059 "adrfam": "IPv4", 00:15:59.059 "traddr": "10.0.0.2", 00:15:59.059 "trsvcid": "4420" 00:15:59.059 }, 00:15:59.059 "peer_address": { 00:15:59.059 "trtype": "TCP", 00:15:59.059 "adrfam": "IPv4", 00:15:59.059 "traddr": "10.0.0.1", 00:15:59.059 "trsvcid": "46346" 00:15:59.059 }, 00:15:59.059 "auth": { 00:15:59.059 "state": "completed", 00:15:59.059 "digest": "sha384", 00:15:59.059 "dhgroup": "ffdhe4096" 00:15:59.059 } 00:15:59.059 } 00:15:59.059 ]' 00:15:59.059 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.317 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.317 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.317 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.317 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.317 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.317 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.317 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.574 17:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.506 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.764 
17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:00.764 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.764 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.764 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:00.764 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:00.764 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.764 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.764 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.765 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.765 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.765 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.765 17:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.023 00:16:01.023 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.023 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.023 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.281 { 00:16:01.281 "cntlid": 77, 00:16:01.281 "qid": 0, 00:16:01.281 "state": "enabled", 00:16:01.281 "thread": "nvmf_tgt_poll_group_000", 00:16:01.281 "listen_address": { 00:16:01.281 "trtype": "TCP", 00:16:01.281 "adrfam": "IPv4", 00:16:01.281 "traddr": "10.0.0.2", 00:16:01.281 "trsvcid": "4420" 00:16:01.281 }, 00:16:01.281 "peer_address": { 
00:16:01.281 "trtype": "TCP", 00:16:01.281 "adrfam": "IPv4", 00:16:01.281 "traddr": "10.0.0.1", 00:16:01.281 "trsvcid": "38444" 00:16:01.281 }, 00:16:01.281 "auth": { 00:16:01.281 "state": "completed", 00:16:01.281 "digest": "sha384", 00:16:01.281 "dhgroup": "ffdhe4096" 00:16:01.281 } 00:16:01.281 } 00:16:01.281 ]' 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.281 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.539 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.539 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.539 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.797 17:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:16:02.730 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.730 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.730 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.730 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.730 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.730 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.730 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.731 17:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.989 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.247 00:16:03.247 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.247 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.247 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.505 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.505 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.505 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.505 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.505 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.505 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.505 { 00:16:03.505 "cntlid": 79, 00:16:03.505 "qid": 0, 00:16:03.505 "state": "enabled", 00:16:03.505 "thread": "nvmf_tgt_poll_group_000", 00:16:03.505 "listen_address": { 00:16:03.505 "trtype": "TCP", 00:16:03.505 "adrfam": "IPv4", 00:16:03.505 "traddr": "10.0.0.2", 00:16:03.505 "trsvcid": "4420" 00:16:03.505 }, 00:16:03.505 "peer_address": { 00:16:03.505 "trtype": "TCP", 00:16:03.505 "adrfam": "IPv4", 00:16:03.505 "traddr": "10.0.0.1", 00:16:03.505 "trsvcid": "38474" 00:16:03.505 }, 00:16:03.505 "auth": { 00:16:03.505 "state": "completed", 00:16:03.505 "digest": "sha384", 00:16:03.505 "dhgroup": "ffdhe4096" 00:16:03.505 } 00:16:03.505 } 00:16:03.505 ]' 00:16:03.505 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:03.763 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.763 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.763 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.763 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.763 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.763 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.763 17:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.020 17:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.953 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.210 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:05.210 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.210 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:05.210 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:05.210 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:05.211 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
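Each connect_authenticate pass verifies the negotiated parameters rather than just the ability to connect: once bdev_nvme_attach_controller succeeds and the controller shows up as nvme0, the target is queried with nvmf_subsystem_get_qpairs and the qpair's auth block is checked field by field, as in the jq calls at auth.sh@44-@48. A sketch of those checks, assuming the qpairs JSON shape shown in this trace:

# Sketch of the qpair auth verification; variable names follow the trace.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]  # e.g. sha384
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. ffdhe6144
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]  # handshake done

The --dhchap-secret and --dhchap-ctrl-secret strings passed to nvme connect use the DH-HMAC-CHAP secret representation from the NVMe base specification: DHHC-1:<t>:<base64 of key plus CRC-32>:, where <t> is 00 for a cleartext secret and 01, 02, or 03 for a secret transformed with SHA-256, SHA-384, or SHA-512 respectively; in this log the four keys happen to cover one prefix each, key0 with 00 up through key3 with 03.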
00:16:05.211 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.211 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.211 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.211 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.211 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.211 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.776 00:16:05.776 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.776 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.776 17:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.034 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.035 { 00:16:06.035 "cntlid": 81, 00:16:06.035 "qid": 0, 00:16:06.035 "state": "enabled", 00:16:06.035 "thread": "nvmf_tgt_poll_group_000", 00:16:06.035 "listen_address": { 00:16:06.035 "trtype": "TCP", 00:16:06.035 "adrfam": "IPv4", 00:16:06.035 "traddr": "10.0.0.2", 00:16:06.035 "trsvcid": "4420" 00:16:06.035 }, 00:16:06.035 "peer_address": { 00:16:06.035 "trtype": "TCP", 00:16:06.035 "adrfam": "IPv4", 00:16:06.035 "traddr": "10.0.0.1", 00:16:06.035 "trsvcid": "38492" 00:16:06.035 }, 00:16:06.035 "auth": { 00:16:06.035 "state": "completed", 00:16:06.035 "digest": "sha384", 00:16:06.035 "dhgroup": "ffdhe6144" 00:16:06.035 } 00:16:06.035 } 00:16:06.035 ]' 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.035 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.292 17:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.292 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.292 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.293 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.293 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.550 17:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.484 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.745 17:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.745 17:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.354 00:16:08.354 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.354 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.354 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.612 { 00:16:08.612 "cntlid": 83, 00:16:08.612 "qid": 0, 00:16:08.612 "state": "enabled", 00:16:08.612 "thread": "nvmf_tgt_poll_group_000", 00:16:08.612 "listen_address": { 00:16:08.612 "trtype": "TCP", 00:16:08.612 "adrfam": "IPv4", 00:16:08.612 "traddr": "10.0.0.2", 00:16:08.612 "trsvcid": "4420" 00:16:08.612 }, 00:16:08.612 "peer_address": { 00:16:08.612 "trtype": "TCP", 00:16:08.612 "adrfam": "IPv4", 00:16:08.612 "traddr": "10.0.0.1", 00:16:08.612 "trsvcid": "38536" 00:16:08.612 }, 00:16:08.612 "auth": { 00:16:08.612 "state": "completed", 00:16:08.612 "digest": "sha384", 00:16:08.612 "dhgroup": "ffdhe6144" 00:16:08.612 } 00:16:08.612 } 00:16:08.612 ]' 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.612 17:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.870 17:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.802 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.060 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.317 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.317 17:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.317 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.882
00:16:10.882 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:10.882 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:10.882 17:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.882 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.882 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.882 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.882 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.139 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.139 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:11.139 {
00:16:11.139 "cntlid": 85,
00:16:11.139 "qid": 0,
00:16:11.139 "state": "enabled",
00:16:11.139 "thread": "nvmf_tgt_poll_group_000",
00:16:11.139 "listen_address": {
00:16:11.139 "trtype": "TCP",
00:16:11.139 "adrfam": "IPv4",
00:16:11.139 "traddr": "10.0.0.2",
00:16:11.139 "trsvcid": "4420"
00:16:11.139 },
00:16:11.139 "peer_address": {
00:16:11.139 "trtype": "TCP",
00:16:11.139 "adrfam": "IPv4",
00:16:11.139 "traddr": "10.0.0.1",
00:16:11.139 "trsvcid": "45444"
00:16:11.139 },
00:16:11.139 "auth": {
00:16:11.139 "state": "completed",
00:16:11.139 "digest": "sha384",
00:16:11.139 "dhgroup": "ffdhe6144"
00:16:11.139 }
00:16:11.139 }
00:16:11.139 ]'
00:16:11.139 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:11.140 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:11.140 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:11.140 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:11.140 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:11.140 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:11.140 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:11.140 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:11.397 17:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:12.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:12.331 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:12.589 17:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:13.154
00:16:13.154 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:13.155 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:13.155 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:13.412 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:13.412 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:13.412 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.412 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.412 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.412 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:13.412 {
00:16:13.412 "cntlid": 87,
00:16:13.412 "qid": 0,
00:16:13.412 "state": "enabled",
00:16:13.412 "thread": "nvmf_tgt_poll_group_000",
00:16:13.412 "listen_address": {
00:16:13.412 "trtype": "TCP",
00:16:13.412 "adrfam": "IPv4",
00:16:13.412 "traddr": "10.0.0.2",
00:16:13.412 "trsvcid": "4420"
00:16:13.412 },
00:16:13.412 "peer_address": {
00:16:13.412 "trtype": "TCP",
00:16:13.412 "adrfam": "IPv4",
00:16:13.412 "traddr": "10.0.0.1",
00:16:13.412 "trsvcid": "45474"
00:16:13.412 },
00:16:13.412 "auth": {
00:16:13.412 "state": "completed",
00:16:13.412 "digest": "sha384",
00:16:13.412 "dhgroup": "ffdhe6144"
00:16:13.412 }
00:16:13.412 }
00:16:13.412 ]'
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:13.672 17:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
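Every passing iteration in this trace runs the same three-step RPC sequence before the verification step. As a hand-condensed sketch of what one pass does (not a literal excerpt: $hostnqn stands in for the long uuid host NQN above, paths are shortened, and key2/ckey2 are the script's key-name convention rather than literal secrets):

  # host-side SPDK app (what the script's hostrpc wrapper targets via /var/tmp/host.sock):
  # pin the digest and DH group combination under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # target side (the script's rpc_cmd): allow the host NQN on the subsystem with the key pair under test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side again: attach a controller; DH-HMAC-CHAP runs as part of the CONNECT exchange
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2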
00:16:13.931 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=:
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:14.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:14.866 17:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:15.123 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:16:15.123 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:15.123 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:15.123 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:15.123 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:15.123 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:15.124 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:15.124 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.124 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.124 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.124 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:15.124 17:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:16.057
00:16:16.057 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:16.057 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:16.057 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:16.315 {
00:16:16.315 "cntlid": 89,
00:16:16.315 "qid": 0,
00:16:16.315 "state": "enabled",
00:16:16.315 "thread": "nvmf_tgt_poll_group_000",
00:16:16.315 "listen_address": {
00:16:16.315 "trtype": "TCP",
00:16:16.315 "adrfam": "IPv4",
00:16:16.315 "traddr": "10.0.0.2",
00:16:16.315 "trsvcid": "4420"
00:16:16.315 },
00:16:16.315 "peer_address": {
00:16:16.315 "trtype": "TCP",
00:16:16.315 "adrfam": "IPv4",
00:16:16.315 "traddr": "10.0.0.1",
00:16:16.315 "trsvcid": "45496"
00:16:16.315 },
00:16:16.315 "auth": {
00:16:16.315 "state": "completed",
00:16:16.315 "digest": "sha384",
00:16:16.315 "dhgroup": "ffdhe8192"
00:16:16.315 }
00:16:16.315 }
00:16:16.315 ]'
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:16.315 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.573 17:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:17.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:17.506 17:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.072 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.638
00:16:18.638 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
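The qpair dumps interleaved above are the assertion step of each pass: the script pulls the subsystem's qpairs from the target and checks that the negotiated auth parameters match what it just configured. A minimal standalone version of that check, using the same jq paths as the trace (expected values here taken from the ffdhe8192 pass):

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]  # negotiated digest
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished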
00:16:18.638 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:18.638 17:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:18.896 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:18.896 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:18.896 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.896 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.896 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.896 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:18.896 {
00:16:18.896 "cntlid": 91,
00:16:18.896 "qid": 0,
00:16:18.896 "state": "enabled",
00:16:18.896 "thread": "nvmf_tgt_poll_group_000",
00:16:18.896 "listen_address": {
00:16:18.896 "trtype": "TCP",
00:16:18.896 "adrfam": "IPv4",
00:16:18.896 "traddr": "10.0.0.2",
00:16:18.896 "trsvcid": "4420"
00:16:18.896 },
00:16:18.896 "peer_address": {
00:16:18.896 "trtype": "TCP",
00:16:18.896 "adrfam": "IPv4",
00:16:18.896 "traddr": "10.0.0.1",
00:16:18.896 "trsvcid": "45528"
00:16:18.896 },
00:16:18.896 "auth": {
00:16:18.896 "state": "completed",
00:16:18.896 "digest": "sha384",
00:16:18.896 "dhgroup": "ffdhe8192"
00:16:18.896 }
00:16:18.896 }
00:16:18.896 ]'
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:19.154 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:19.411 17:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==:
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:20.345 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:20.603 17:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:21.536
00:16:21.536 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:21.536 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:21.536 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:21.794 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:21.794 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:21.795 {
00:16:21.795 "cntlid": 93,
00:16:21.795 "qid": 0,
00:16:21.795 "state": "enabled",
00:16:21.795 "thread": "nvmf_tgt_poll_group_000",
00:16:21.795 "listen_address": {
00:16:21.795 "trtype": "TCP",
00:16:21.795 "adrfam": "IPv4",
00:16:21.795 "traddr": "10.0.0.2",
00:16:21.795 "trsvcid": "4420"
00:16:21.795 },
00:16:21.795 "peer_address": {
00:16:21.795 "trtype": "TCP",
00:16:21.795 "adrfam": "IPv4",
00:16:21.795 "traddr": "10.0.0.1",
00:16:21.795 "trsvcid": "38980"
00:16:21.795 },
00:16:21.795 "auth": {
00:16:21.795 "state": "completed",
00:16:21.795 "digest": "sha384",
00:16:21.795 "dhgroup": "ffdhe8192"
00:16:21.795 }
00:16:21.795 }
00:16:21.795 ]'
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:21.795 17:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:22.052 17:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:23.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
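Each pass above also exercises the Linux kernel initiator: nvme-cli receives the same key material in the DHHC-1:<t>: text representation of a DH-HMAC-CHAP secret, where the <t> field (00, 01, 02, 03) records how the secret was transformed (no transform, SHA-256, SHA-384, SHA-512), and --dhchap-ctrl-secret is only passed for keys that have a controller-side counterpart (key3 above has none). Schematically, with placeholder values rather than this run's secrets:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "DHHC-1:02:<base64 host secret>:" \
    --dhchap-ctrl-secret "DHHC-1:01:<base64 controller secret>:"   # omit for unidirectional auth
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0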
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:23.426 17:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:24.360
00:16:24.360 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:24.360 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:24.360 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:24.618 {
00:16:24.618 "cntlid": 95,
00:16:24.618 "qid": 0,
00:16:24.618 "state": "enabled",
00:16:24.618 "thread": "nvmf_tgt_poll_group_000",
00:16:24.618 "listen_address": {
00:16:24.618 "trtype": "TCP",
00:16:24.618 "adrfam": "IPv4",
00:16:24.618 "traddr": "10.0.0.2",
00:16:24.618 "trsvcid": "4420"
00:16:24.618 },
00:16:24.618 "peer_address": {
00:16:24.618 "trtype": "TCP",
00:16:24.618 "adrfam": "IPv4",
00:16:24.618 "traddr": "10.0.0.1",
00:16:24.618 "trsvcid": "39016"
00:16:24.618 },
00:16:24.618 "auth": {
00:16:24.618 "state": "completed",
00:16:24.618 "digest": "sha384",
00:16:24.618 "dhgroup": "ffdhe8192"
00:16:24.618 }
00:16:24.618 }
00:16:24.618 ]'
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:24.618 17:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:24.876 17:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=:
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:25.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
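The auth.sh@91 to @93 trace lines above mark the top of the sweep: from here the sha512 passes begin. The shape of the whole run is three nested loops over digest, DH group, and key index, re-pinning the host options before each connect_authenticate call. Reconstructed from those trace markers (the loop body is inferred, and the array contents are suggested by the values that appear in this log rather than quoted from the script):

  for digest in "${digests[@]}"; do        # e.g. sha256 sha384 sha512
    for dhgroup in "${dhgroups[@]}"; do    # e.g. null ffdhe2048 ... ffdhe8192
      for keyid in "${!keys[@]}"; do       # key0 .. key3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done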
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:25.818 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:26.107 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:26.364
00:16:26.364 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:26.364 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:26.364 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:26.622 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:26.622 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:26.622 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.622 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.622 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.622 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:26.622 {
00:16:26.622 "cntlid": 97,
00:16:26.622 "qid": 0,
00:16:26.622 "state": "enabled",
00:16:26.622 "thread": "nvmf_tgt_poll_group_000",
00:16:26.622 "listen_address": {
00:16:26.622 "trtype": "TCP",
00:16:26.622 "adrfam": "IPv4",
00:16:26.622 "traddr": "10.0.0.2",
00:16:26.622 "trsvcid": "4420"
00:16:26.622 },
00:16:26.622 "peer_address": {
00:16:26.622 "trtype": "TCP",
00:16:26.622 "adrfam": "IPv4",
00:16:26.622 "traddr": "10.0.0.1",
00:16:26.622 "trsvcid": "39042"
00:16:26.622 },
00:16:26.622 "auth": {
00:16:26.622 "state": "completed",
00:16:26.622 "digest": "sha512",
00:16:26.622 "dhgroup": "null"
00:16:26.622 }
00:16:26.622 }
00:16:26.622 ]'
00:16:26.622 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:26.879 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:26.879 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:26.879 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:26.879 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:26.879 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:26.879 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:26.879 17:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.137 17:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:28.070 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.328 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:28.586
00:16:28.586 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:28.586 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:28.586 17:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.844 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.844 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.844 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.844 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.844 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.844 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:28.844 {
00:16:28.844 "cntlid": 99,
00:16:28.844 "qid": 0,
00:16:28.844 "state": "enabled",
00:16:28.844 "thread": "nvmf_tgt_poll_group_000",
00:16:28.844 "listen_address": {
00:16:28.844 "trtype": "TCP",
00:16:28.844 "adrfam": "IPv4",
00:16:28.844 "traddr": "10.0.0.2",
00:16:28.844 "trsvcid": "4420"
00:16:28.844 },
00:16:28.844 "peer_address": {
00:16:28.844 "trtype": "TCP",
00:16:28.844 "adrfam": "IPv4",
00:16:28.844 "traddr": "10.0.0.1",
00:16:28.844 "trsvcid": "39070"
00:16:28.844 },
00:16:28.844 "auth": {
00:16:28.844 "state": "completed",
00:16:28.844 "digest": "sha512",
00:16:28.844 "dhgroup": "null"
00:16:28.844 }
00:16:28.844 }
00:16:28.844 ]'
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.102 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.360 17:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==:
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:30.292 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.550 17:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:30.808
00:16:30.808 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:30.808 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:30.808 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.066 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.066 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.066 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.066 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.066 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:31.066 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:31.066 {
00:16:31.066 "cntlid": 101,
00:16:31.066 "qid": 0,
00:16:31.066 "state": "enabled",
00:16:31.066 "thread": "nvmf_tgt_poll_group_000",
00:16:31.066 "listen_address": {
00:16:31.066 "trtype": "TCP",
00:16:31.066 "adrfam": "IPv4",
00:16:31.066 "traddr": "10.0.0.2",
00:16:31.066 "trsvcid": "4420"
00:16:31.066 },
00:16:31.066 "peer_address": {
00:16:31.066 "trtype": "TCP",
00:16:31.066 "adrfam": "IPv4",
00:16:31.066 "traddr": "10.0.0.1",
00:16:31.066 "trsvcid": "44078"
00:16:31.066 },
00:16:31.066 "auth": {
00:16:31.066 "state": "completed",
00:16:31.066 "digest": "sha512",
00:16:31.066 "dhgroup": "null"
00:16:31.066 }
00:16:31.066 }
00:16:31.066 ]'
00:16:31.066 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:31.324 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:31.324 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:31.324 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:31.324 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:31.324 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:31.324 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:31.324 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:31.581 17:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3:
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:32.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:32.513 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:32.770 17:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:33.335
00:16:33.335 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:33.335 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:33.335 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:33.335 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:33.335 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:33.335 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.335 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:33.593 {
00:16:33.593 "cntlid": 103,
00:16:33.593 "qid": 0,
00:16:33.593 "state": "enabled",
00:16:33.593 "thread": "nvmf_tgt_poll_group_000",
00:16:33.593 "listen_address": {
00:16:33.593 "trtype": "TCP",
00:16:33.593 "adrfam": "IPv4",
00:16:33.593 "traddr": "10.0.0.2",
00:16:33.593 "trsvcid": "4420"
00:16:33.593 },
00:16:33.593 "peer_address": {
00:16:33.593 "trtype": "TCP",
00:16:33.593 "adrfam": "IPv4",
00:16:33.593 "traddr": "10.0.0.1",
00:16:33.593 "trsvcid": "44108"
00:16:33.593 },
00:16:33.593 "auth": {
00:16:33.593 "state": "completed",
00:16:33.593 "digest": "sha512",
00:16:33.593 "dhgroup": "null"
00:16:33.593 }
00:16:33.593 }
00:16:33.593 ]'
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:33.593 17:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:33.850 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=:
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:34.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:34.813 17:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.071 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:35.637
00:16:35.637 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:35.637 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:35.637 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:35.637 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:35.637 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:35.637 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:35.637 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.895 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.895 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:35.895 {
00:16:35.895 "cntlid": 105,
00:16:35.895 "qid": 0,
00:16:35.895 "state": "enabled",
00:16:35.895 "thread": "nvmf_tgt_poll_group_000",
00:16:35.895 "listen_address": {
00:16:35.895 "trtype": "TCP",
00:16:35.895 "adrfam": "IPv4",
00:16:35.895 "traddr": "10.0.0.2",
00:16:35.895 "trsvcid": "4420"
00:16:35.895 },
00:16:35.895 "peer_address": {
00:16:35.895 "trtype": "TCP",
00:16:35.895 "adrfam": "IPv4",
00:16:35.895 "traddr": "10.0.0.1",
00:16:35.895 "trsvcid": "44126"
00:16:35.895 },
00:16:35.895 "auth": {
00:16:35.895 "state": "completed",
00:16:35.895 "digest": "sha512",
00:16:35.895 "dhgroup": "ffdhe2048"
00:16:35.895 }
00:16:35.895 }
00:16:35.895 ]'
00:16:35.895 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:35.895 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:35.895 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:35.895 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:35.895 17:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:35.895 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:35.895 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:35.895 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.153 17:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=:
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:37.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:37.086 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[
0 == 0 ]] 00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.344 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.602 00:16:37.602 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.602 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.602 17:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.860 { 00:16:37.860 "cntlid": 107, 00:16:37.860 "qid": 0, 00:16:37.860 "state": "enabled", 00:16:37.860 "thread": "nvmf_tgt_poll_group_000", 00:16:37.860 "listen_address": { 00:16:37.860 "trtype": "TCP", 00:16:37.860 "adrfam": "IPv4", 00:16:37.860 "traddr": "10.0.0.2", 00:16:37.860 "trsvcid": "4420" 00:16:37.860 }, 00:16:37.860 "peer_address": { 00:16:37.860 "trtype": "TCP", 00:16:37.860 "adrfam": "IPv4", 00:16:37.860 "traddr": "10.0.0.1", 00:16:37.860 "trsvcid": "44150" 00:16:37.860 }, 00:16:37.860 "auth": { 00:16:37.860 "state": "completed", 00:16:37.860 "digest": "sha512", 00:16:37.860 "dhgroup": "ffdhe2048" 00:16:37.860 } 00:16:37.860 } 00:16:37.860 ]' 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.860 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.118 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.118 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.118 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.118 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.118 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.376 17:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.314 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
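
The entries above and below trace single passes of the test's connect_authenticate helper: restrict the host's DH-HMAC-CHAP digests and dhgroups, allow the host NQN on the subsystem with a key pair, attach a controller through the host RPC socket, confirm the negotiated digest, dhgroup, and auth state on the target's qpair, then detach. What follows is a minimal standalone sketch of one such pass (sha512/ffdhe2048 with key2), not the test's actual helper: it assumes the rpc.py path and host socket shown in the log, that target-side calls use rpc.py's default socket as the test's rpc_cmd wrapper appears to, and that keys named key2/ckey2 were registered earlier in the script, outside this excerpt.

#!/usr/bin/env bash
set -e

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

digest=sha512 dhgroup=ffdhe2048 keyid=2
# The log's ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion makes the
# controller key optional; here it is populated unconditionally for keyid=2.
ckey=(--dhchap-ctrlr-key "ckey$keyid")

# Restrict the host side to one digest/dhgroup pair for this iteration.
"$RPC" -s "$HOSTSOCK" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the subsystem with the matching key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" "${ckey[@]}"

# Attach via the host RPC socket, then verify the negotiated auth
# parameters on the target's qpair, as the jq checks in the log do.
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key "key$keyid" "${ckey[@]}"

qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]

"$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0

Because the controller key is an optional array expansion, iterations that leave ckey unset (the key3 passes in this log) drop the --dhchap-ctrlr-key flag entirely, so unidirectional and bidirectional authentication share one code path.
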
00:16:39.572 17:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.829 00:16:40.087 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.087 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.087 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.087 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.087 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.087 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.087 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.345 { 00:16:40.345 "cntlid": 109, 00:16:40.345 "qid": 0, 00:16:40.345 "state": "enabled", 00:16:40.345 "thread": "nvmf_tgt_poll_group_000", 00:16:40.345 "listen_address": { 00:16:40.345 "trtype": "TCP", 00:16:40.345 "adrfam": "IPv4", 00:16:40.345 "traddr": "10.0.0.2", 00:16:40.345 "trsvcid": "4420" 00:16:40.345 }, 00:16:40.345 "peer_address": { 00:16:40.345 "trtype": "TCP", 00:16:40.345 "adrfam": "IPv4", 00:16:40.345 "traddr": "10.0.0.1", 00:16:40.345 "trsvcid": "51494" 00:16:40.345 }, 00:16:40.345 "auth": { 00:16:40.345 "state": "completed", 00:16:40.345 "digest": "sha512", 00:16:40.345 "dhgroup": "ffdhe2048" 00:16:40.345 } 00:16:40.345 } 00:16:40.345 ]' 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.345 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.602 17:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.535 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.793 17:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.052 00:16:42.310 17:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.310 { 00:16:42.310 "cntlid": 111, 00:16:42.310 "qid": 0, 00:16:42.310 "state": "enabled", 00:16:42.310 "thread": "nvmf_tgt_poll_group_000", 00:16:42.310 "listen_address": { 00:16:42.310 "trtype": "TCP", 00:16:42.310 "adrfam": "IPv4", 00:16:42.310 "traddr": "10.0.0.2", 00:16:42.310 "trsvcid": "4420" 00:16:42.310 }, 00:16:42.310 "peer_address": { 00:16:42.310 "trtype": "TCP", 00:16:42.310 "adrfam": "IPv4", 00:16:42.310 "traddr": "10.0.0.1", 00:16:42.310 "trsvcid": "51512" 00:16:42.310 }, 00:16:42.310 "auth": { 00:16:42.310 "state": "completed", 00:16:42.310 "digest": "sha512", 00:16:42.310 "dhgroup": "ffdhe2048" 00:16:42.310 } 00:16:42.310 } 00:16:42.310 ]' 00:16:42.310 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.568 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.568 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.568 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.568 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.568 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.568 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.568 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.830 17:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.806 17:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.806 17:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.064 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.323 00:16:44.323 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.323 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.323 17:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.581 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.581 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.581 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.581 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.581 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.581 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.581 { 00:16:44.581 "cntlid": 113, 00:16:44.581 "qid": 0, 00:16:44.581 "state": "enabled", 00:16:44.581 "thread": "nvmf_tgt_poll_group_000", 00:16:44.581 "listen_address": { 00:16:44.581 "trtype": "TCP", 00:16:44.581 "adrfam": "IPv4", 00:16:44.581 "traddr": "10.0.0.2", 00:16:44.581 "trsvcid": "4420" 00:16:44.581 }, 00:16:44.581 "peer_address": { 00:16:44.581 "trtype": "TCP", 00:16:44.581 "adrfam": "IPv4", 00:16:44.581 "traddr": "10.0.0.1", 00:16:44.581 "trsvcid": "51536" 00:16:44.581 }, 00:16:44.581 "auth": { 00:16:44.581 "state": "completed", 00:16:44.581 "digest": "sha512", 00:16:44.581 "dhgroup": "ffdhe3072" 00:16:44.581 } 00:16:44.581 } 00:16:44.581 ]' 00:16:44.581 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.839 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.839 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.839 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.839 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.839 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.839 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.839 17:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.096 17:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.029 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.287 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.853 00:16:46.853 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.853 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.853 17:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.111 { 00:16:47.111 "cntlid": 115, 00:16:47.111 "qid": 0, 00:16:47.111 "state": "enabled", 00:16:47.111 "thread": "nvmf_tgt_poll_group_000", 00:16:47.111 "listen_address": { 00:16:47.111 "trtype": "TCP", 00:16:47.111 "adrfam": "IPv4", 00:16:47.111 "traddr": "10.0.0.2", 00:16:47.111 "trsvcid": "4420" 00:16:47.111 }, 00:16:47.111 "peer_address": { 00:16:47.111 "trtype": "TCP", 00:16:47.111 "adrfam": "IPv4", 00:16:47.111 "traddr": "10.0.0.1", 00:16:47.111 "trsvcid": "51566" 00:16:47.111 }, 00:16:47.111 "auth": { 00:16:47.111 "state": "completed", 00:16:47.111 "digest": "sha512", 00:16:47.111 "dhgroup": "ffdhe3072" 00:16:47.111 } 00:16:47.111 } 00:16:47.111 ]' 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.111 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.368 17:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:16:48.301 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.301 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.301 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.301 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.301 17:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.301 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.301 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.301 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.558 17:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.125 00:16:49.125 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.125 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.125 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.383 17:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.383 { 00:16:49.383 "cntlid": 117, 00:16:49.383 "qid": 0, 00:16:49.383 "state": "enabled", 00:16:49.383 "thread": "nvmf_tgt_poll_group_000", 00:16:49.383 "listen_address": { 00:16:49.383 "trtype": "TCP", 00:16:49.383 "adrfam": "IPv4", 00:16:49.383 "traddr": "10.0.0.2", 00:16:49.383 "trsvcid": "4420" 00:16:49.383 }, 00:16:49.383 "peer_address": { 00:16:49.383 "trtype": "TCP", 00:16:49.383 "adrfam": "IPv4", 00:16:49.383 "traddr": "10.0.0.1", 00:16:49.383 "trsvcid": "51596" 00:16:49.383 }, 00:16:49.383 "auth": { 00:16:49.383 "state": "completed", 00:16:49.383 "digest": "sha512", 00:16:49.383 "dhgroup": "ffdhe3072" 00:16:49.383 } 00:16:49.383 } 00:16:49.383 ]' 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.383 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.641 17:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:16:50.574 17:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.832 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.397 00:16:51.397 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.397 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.397 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.397 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.397 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.397 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.655 { 00:16:51.655 "cntlid": 119, 00:16:51.655 "qid": 0, 00:16:51.655 "state": "enabled", 00:16:51.655 "thread": 
"nvmf_tgt_poll_group_000", 00:16:51.655 "listen_address": { 00:16:51.655 "trtype": "TCP", 00:16:51.655 "adrfam": "IPv4", 00:16:51.655 "traddr": "10.0.0.2", 00:16:51.655 "trsvcid": "4420" 00:16:51.655 }, 00:16:51.655 "peer_address": { 00:16:51.655 "trtype": "TCP", 00:16:51.655 "adrfam": "IPv4", 00:16:51.655 "traddr": "10.0.0.1", 00:16:51.655 "trsvcid": "53208" 00:16:51.655 }, 00:16:51.655 "auth": { 00:16:51.655 "state": "completed", 00:16:51.655 "digest": "sha512", 00:16:51.655 "dhgroup": "ffdhe3072" 00:16:51.655 } 00:16:51.655 } 00:16:51.655 ]' 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.655 17:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.913 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:52.846 17:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.105 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.670 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.670 { 00:16:53.670 "cntlid": 121, 00:16:53.670 "qid": 0, 00:16:53.670 "state": "enabled", 00:16:53.670 "thread": "nvmf_tgt_poll_group_000", 00:16:53.670 "listen_address": { 00:16:53.670 "trtype": "TCP", 00:16:53.670 "adrfam": "IPv4", 00:16:53.670 "traddr": "10.0.0.2", 00:16:53.670 "trsvcid": "4420" 00:16:53.670 }, 00:16:53.670 "peer_address": { 00:16:53.670 "trtype": "TCP", 00:16:53.670 "adrfam": 
"IPv4", 00:16:53.670 "traddr": "10.0.0.1", 00:16:53.670 "trsvcid": "53232" 00:16:53.670 }, 00:16:53.670 "auth": { 00:16:53.670 "state": "completed", 00:16:53.670 "digest": "sha512", 00:16:53.670 "dhgroup": "ffdhe4096" 00:16:53.670 } 00:16:53.670 } 00:16:53.670 ]' 00:16:53.670 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.928 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.928 17:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.928 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.928 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.928 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.928 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.928 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.186 17:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.119 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.377 
17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.377 17:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.942 00:16:55.943 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.943 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.943 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.200 { 00:16:56.200 "cntlid": 123, 00:16:56.200 "qid": 0, 00:16:56.200 "state": "enabled", 00:16:56.200 "thread": "nvmf_tgt_poll_group_000", 00:16:56.200 "listen_address": { 00:16:56.200 "trtype": "TCP", 00:16:56.200 "adrfam": "IPv4", 00:16:56.200 "traddr": "10.0.0.2", 00:16:56.200 "trsvcid": "4420" 00:16:56.200 }, 00:16:56.200 "peer_address": { 00:16:56.200 "trtype": "TCP", 00:16:56.200 "adrfam": "IPv4", 00:16:56.200 "traddr": "10.0.0.1", 00:16:56.200 "trsvcid": "53276" 00:16:56.200 }, 00:16:56.200 "auth": { 00:16:56.200 "state": "completed", 00:16:56.200 "digest": "sha512", 00:16:56.200 "dhgroup": "ffdhe4096" 00:16:56.200 } 00:16:56.200 } 00:16:56.200 ]' 00:16:56.200 17:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.200 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.458 17:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.390 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.647 17:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.212 00:16:58.212 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.212 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.212 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.470 { 00:16:58.470 "cntlid": 125, 00:16:58.470 "qid": 0, 00:16:58.470 "state": "enabled", 00:16:58.470 "thread": "nvmf_tgt_poll_group_000", 00:16:58.470 "listen_address": { 00:16:58.470 "trtype": "TCP", 00:16:58.470 "adrfam": "IPv4", 00:16:58.470 "traddr": "10.0.0.2", 00:16:58.470 "trsvcid": "4420" 00:16:58.470 }, 00:16:58.470 "peer_address": { 00:16:58.470 "trtype": "TCP", 00:16:58.470 "adrfam": "IPv4", 00:16:58.470 "traddr": "10.0.0.1", 00:16:58.470 "trsvcid": "53308" 00:16:58.470 }, 00:16:58.470 "auth": { 00:16:58.470 "state": "completed", 00:16:58.470 "digest": "sha512", 00:16:58.470 "dhgroup": "ffdhe4096" 00:16:58.470 } 00:16:58.470 } 00:16:58.470 ]' 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.470 
17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.470 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.727 17:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:59.660 17:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.918 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.919 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.482 00:17:00.482 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.482 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.482 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.772 { 00:17:00.772 "cntlid": 127, 00:17:00.772 "qid": 0, 00:17:00.772 "state": "enabled", 00:17:00.772 "thread": "nvmf_tgt_poll_group_000", 00:17:00.772 "listen_address": { 00:17:00.772 "trtype": "TCP", 00:17:00.772 "adrfam": "IPv4", 00:17:00.772 "traddr": "10.0.0.2", 00:17:00.772 "trsvcid": "4420" 00:17:00.772 }, 00:17:00.772 "peer_address": { 00:17:00.772 "trtype": "TCP", 00:17:00.772 "adrfam": "IPv4", 00:17:00.772 "traddr": "10.0.0.1", 00:17:00.772 "trsvcid": "52392" 00:17:00.772 }, 00:17:00.772 "auth": { 00:17:00.772 "state": "completed", 00:17:00.772 "digest": "sha512", 00:17:00.772 "dhgroup": "ffdhe4096" 00:17:00.772 } 00:17:00.772 } 00:17:00.772 ]' 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.772 17:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.035 17:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.406 17:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.969 00:17:02.969 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.969 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.969 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.226 { 00:17:03.226 "cntlid": 129, 00:17:03.226 "qid": 0, 00:17:03.226 "state": "enabled", 00:17:03.226 "thread": "nvmf_tgt_poll_group_000", 00:17:03.226 "listen_address": { 00:17:03.226 "trtype": "TCP", 00:17:03.226 "adrfam": "IPv4", 00:17:03.226 "traddr": "10.0.0.2", 00:17:03.226 "trsvcid": "4420" 00:17:03.226 }, 00:17:03.226 "peer_address": { 00:17:03.226 "trtype": "TCP", 00:17:03.226 "adrfam": "IPv4", 00:17:03.226 "traddr": "10.0.0.1", 00:17:03.226 "trsvcid": "52416" 00:17:03.226 }, 00:17:03.226 "auth": { 00:17:03.226 "state": "completed", 00:17:03.226 "digest": "sha512", 00:17:03.226 "dhgroup": "ffdhe6144" 00:17:03.226 } 00:17:03.226 } 00:17:03.226 ]' 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.226 17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.790 
17:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.721 17:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.979 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.543 00:17:05.543 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.543 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.543 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.800 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.800 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.800 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.800 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.800 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.800 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.800 { 00:17:05.800 "cntlid": 131, 00:17:05.800 "qid": 0, 00:17:05.800 "state": "enabled", 00:17:05.800 "thread": "nvmf_tgt_poll_group_000", 00:17:05.800 "listen_address": { 00:17:05.800 "trtype": "TCP", 00:17:05.800 "adrfam": "IPv4", 00:17:05.800 "traddr": "10.0.0.2", 00:17:05.800 "trsvcid": "4420" 00:17:05.800 }, 00:17:05.800 "peer_address": { 00:17:05.800 "trtype": "TCP", 00:17:05.800 "adrfam": "IPv4", 00:17:05.800 "traddr": "10.0.0.1", 00:17:05.800 "trsvcid": "52450" 00:17:05.800 }, 00:17:05.800 "auth": { 00:17:05.800 "state": "completed", 00:17:05.800 "digest": "sha512", 00:17:05.800 "dhgroup": "ffdhe6144" 00:17:05.800 } 00:17:05.800 } 00:17:05.800 ]' 00:17:05.800 17:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.800 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.800 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.057 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.057 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.057 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.057 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.057 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.315 17:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.246 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.503 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:07.503 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.503 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.503 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:07.503 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:07.503 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.504 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.504 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.504 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.504 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.504 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.504 17:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.069 
00:17:08.069 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.069 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.069 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.327 { 00:17:08.327 "cntlid": 133, 00:17:08.327 "qid": 0, 00:17:08.327 "state": "enabled", 00:17:08.327 "thread": "nvmf_tgt_poll_group_000", 00:17:08.327 "listen_address": { 00:17:08.327 "trtype": "TCP", 00:17:08.327 "adrfam": "IPv4", 00:17:08.327 "traddr": "10.0.0.2", 00:17:08.327 "trsvcid": "4420" 00:17:08.327 }, 00:17:08.327 "peer_address": { 00:17:08.327 "trtype": "TCP", 00:17:08.327 "adrfam": "IPv4", 00:17:08.327 "traddr": "10.0.0.1", 00:17:08.327 "trsvcid": "52490" 00:17:08.327 }, 00:17:08.327 "auth": { 00:17:08.327 "state": "completed", 00:17:08.327 "digest": "sha512", 00:17:08.327 "dhgroup": "ffdhe6144" 00:17:08.327 } 00:17:08.327 } 00:17:08.327 ]' 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.327 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.585 17:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:17:09.517 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.517 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:09.517 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.517 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.517 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.775 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.775 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.775 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.775 17:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.775 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.340 00:17:10.340 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.340 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.340 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:10.598 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.598 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.598 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.598 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.598 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.598 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.598 { 00:17:10.598 "cntlid": 135, 00:17:10.598 "qid": 0, 00:17:10.598 "state": "enabled", 00:17:10.598 "thread": "nvmf_tgt_poll_group_000", 00:17:10.598 "listen_address": { 00:17:10.598 "trtype": "TCP", 00:17:10.598 "adrfam": "IPv4", 00:17:10.598 "traddr": "10.0.0.2", 00:17:10.598 "trsvcid": "4420" 00:17:10.598 }, 00:17:10.598 "peer_address": { 00:17:10.598 "trtype": "TCP", 00:17:10.598 "adrfam": "IPv4", 00:17:10.598 "traddr": "10.0.0.1", 00:17:10.598 "trsvcid": "57378" 00:17:10.598 }, 00:17:10.598 "auth": { 00:17:10.598 "state": "completed", 00:17:10.598 "digest": "sha512", 00:17:10.598 "dhgroup": "ffdhe6144" 00:17:10.598 } 00:17:10.598 } 00:17:10.598 ]' 00:17:10.598 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.856 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.856 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.856 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.856 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.856 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.856 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.856 17:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.114 17:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.046 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.304 17:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.239 00:17:13.239 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.239 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.239 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.497 { 00:17:13.497 "cntlid": 137, 00:17:13.497 "qid": 0, 00:17:13.497 "state": "enabled", 00:17:13.497 "thread": "nvmf_tgt_poll_group_000", 00:17:13.497 "listen_address": { 00:17:13.497 "trtype": "TCP", 00:17:13.497 "adrfam": "IPv4", 00:17:13.497 "traddr": "10.0.0.2", 00:17:13.497 "trsvcid": "4420" 00:17:13.497 }, 00:17:13.497 "peer_address": { 00:17:13.497 "trtype": "TCP", 00:17:13.497 "adrfam": "IPv4", 00:17:13.497 "traddr": "10.0.0.1", 00:17:13.497 "trsvcid": "57402" 00:17:13.497 }, 00:17:13.497 "auth": { 00:17:13.497 "state": "completed", 00:17:13.497 "digest": "sha512", 00:17:13.497 "dhgroup": "ffdhe8192" 00:17:13.497 } 00:17:13.497 } 00:17:13.497 ]' 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.497 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.755 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.755 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.755 17:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.013 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:17:14.947 18:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.947 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.947 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.947 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.947 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.947 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.947 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.947 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.204 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:15.204 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.204 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.205 18:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.132 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.132 { 00:17:16.132 "cntlid": 139, 00:17:16.132 "qid": 0, 00:17:16.132 "state": "enabled", 00:17:16.132 "thread": "nvmf_tgt_poll_group_000", 00:17:16.132 "listen_address": { 00:17:16.132 "trtype": "TCP", 00:17:16.132 "adrfam": "IPv4", 00:17:16.132 "traddr": "10.0.0.2", 00:17:16.132 "trsvcid": "4420" 00:17:16.132 }, 00:17:16.132 "peer_address": { 00:17:16.132 "trtype": "TCP", 00:17:16.132 "adrfam": "IPv4", 00:17:16.132 "traddr": "10.0.0.1", 00:17:16.132 "trsvcid": "57430" 00:17:16.132 }, 00:17:16.132 "auth": { 00:17:16.132 "state": "completed", 00:17:16.132 "digest": "sha512", 00:17:16.132 "dhgroup": "ffdhe8192" 00:17:16.132 } 00:17:16.132 } 00:17:16.132 ]' 00:17:16.132 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.390 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.390 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.390 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.390 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.390 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.390 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.390 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.648 18:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:YTVjNjQyMDcxMTlmZjZjMDhhNGM5MzBiNTIxYTg3N2Ullevf: --dhchap-ctrl-secret DHHC-1:02:YTg3YmQzOWI2N2NmNGRkZjc1Y2MyNzJiMzhiZDFlMzQ2NDdjMDQzY2Q1ZmQyYWJiDoalQQ==: 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.579 18:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.907 18:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.839 00:17:18.839 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.839 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.839 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.131 { 00:17:19.131 "cntlid": 141, 00:17:19.131 "qid": 0, 00:17:19.131 "state": "enabled", 00:17:19.131 "thread": "nvmf_tgt_poll_group_000", 00:17:19.131 "listen_address": 
{ 00:17:19.131 "trtype": "TCP", 00:17:19.131 "adrfam": "IPv4", 00:17:19.131 "traddr": "10.0.0.2", 00:17:19.131 "trsvcid": "4420" 00:17:19.131 }, 00:17:19.131 "peer_address": { 00:17:19.131 "trtype": "TCP", 00:17:19.131 "adrfam": "IPv4", 00:17:19.131 "traddr": "10.0.0.1", 00:17:19.131 "trsvcid": "57466" 00:17:19.131 }, 00:17:19.131 "auth": { 00:17:19.131 "state": "completed", 00:17:19.131 "digest": "sha512", 00:17:19.131 "dhgroup": "ffdhe8192" 00:17:19.131 } 00:17:19.131 } 00:17:19.131 ]' 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.131 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.388 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.388 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.388 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.388 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.388 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.647 18:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YWIxMDNmN2ExNDY4M2ZjMjNiMWMyNTI4ODQ5ZTU1ZGFhOGE3ZDQ3YmU3MTU3MjQ3t3W0iQ==: --dhchap-ctrl-secret DHHC-1:01:OTliNDI0MTYwMWY5OWQ2NzdjNmYxZjQyMDM2MDA1YTNnnDn3: 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.578 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.836 18:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.768 00:17:21.768 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.768 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.768 18:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.024 { 00:17:22.024 "cntlid": 143, 00:17:22.024 "qid": 0, 00:17:22.024 "state": "enabled", 00:17:22.024 "thread": "nvmf_tgt_poll_group_000", 00:17:22.024 "listen_address": { 00:17:22.024 "trtype": "TCP", 00:17:22.024 "adrfam": "IPv4", 00:17:22.024 "traddr": "10.0.0.2", 00:17:22.024 "trsvcid": "4420" 00:17:22.024 }, 00:17:22.024 "peer_address": { 00:17:22.024 "trtype": "TCP", 00:17:22.024 "adrfam": "IPv4", 00:17:22.024 "traddr": "10.0.0.1", 00:17:22.024 "trsvcid": "33364" 00:17:22.024 }, 00:17:22.024 "auth": { 00:17:22.024 "state": "completed", 00:17:22.024 "digest": "sha512", 00:17:22.024 "dhgroup": 
"ffdhe8192" 00:17:22.024 } 00:17:22.024 } 00:17:22.024 ]' 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.024 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.281 18:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:23.215 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.473 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.474 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.474 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.474 18:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.408 00:17:24.408 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.408 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.408 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.666 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.666 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.666 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.666 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.666 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.666 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.666 { 00:17:24.666 "cntlid": 145, 00:17:24.666 "qid": 0, 00:17:24.666 "state": "enabled", 00:17:24.666 "thread": "nvmf_tgt_poll_group_000", 00:17:24.666 "listen_address": { 00:17:24.666 "trtype": "TCP", 00:17:24.666 "adrfam": "IPv4", 00:17:24.666 "traddr": "10.0.0.2", 00:17:24.666 "trsvcid": "4420" 00:17:24.666 }, 00:17:24.666 "peer_address": { 00:17:24.666 "trtype": "TCP", 00:17:24.666 "adrfam": "IPv4", 00:17:24.666 "traddr": "10.0.0.1", 00:17:24.666 "trsvcid": "33376" 00:17:24.666 }, 00:17:24.666 "auth": { 00:17:24.666 
"state": "completed", 00:17:24.666 "digest": "sha512", 00:17:24.666 "dhgroup": "ffdhe8192" 00:17:24.666 } 00:17:24.666 } 00:17:24.666 ]' 00:17:24.666 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.667 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.667 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.667 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.667 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.924 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.924 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.924 18:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.182 18:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:M2IzNGQwMjlkNDI0NGFkNzZiZjEwNjlmODQ3OThiYjVlZWM4NTBlZTQ4M2FiZTdi6LdcdQ==: --dhchap-ctrl-secret DHHC-1:03:NzVkMzA2ZjM5ODMwODcwNGYyYzgzMzA2YzY3NDY4NjU0OTQ1Nzg0ZDhhZmIyYTAwZWMzNmU4ZTNkMDJkNGRkMTTz+EI=: 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:26.116 18:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:26.116 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:26.117 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.117 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.053 request: 00:17:27.053 { 00:17:27.053 "name": "nvme0", 00:17:27.053 "trtype": "tcp", 00:17:27.053 "traddr": "10.0.0.2", 00:17:27.053 "adrfam": "ipv4", 00:17:27.053 "trsvcid": "4420", 00:17:27.053 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:27.053 "prchk_reftag": false, 00:17:27.053 "prchk_guard": false, 00:17:27.053 "hdgst": false, 00:17:27.053 "ddgst": false, 00:17:27.053 "dhchap_key": "key2", 00:17:27.053 "method": "bdev_nvme_attach_controller", 00:17:27.053 "req_id": 1 00:17:27.053 } 00:17:27.053 Got JSON-RPC error response 00:17:27.053 response: 00:17:27.053 { 00:17:27.053 "code": -5, 00:17:27.053 "message": "Input/output error" 00:17:27.053 } 00:17:27.053 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:27.053 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.053 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:27.053 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.053 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:27.053 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.053 18:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.053 
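The exchange above is the negative test for a controller-key mismatch: the target has the host registered for key1 only, the host then calls bdev_nvme_attach_controller with key2, and the attach fails with JSON-RPC code -5 (Input/output error) because DH-HMAC-CHAP negotiation cannot complete. A minimal standalone sketch of the same check follows, assuming a target listening on 10.0.0.2:4420 and a host-side RPC server on /var/tmp/host.sock; the rpc.py path and NQNs are copied from the trace but should be treated as placeholders for any given environment.

    #!/usr/bin/env bash
    # Reduced sketch of the mismatched-key case traced above (not the actual
    # target/auth.sh code). Paths, socket, and NQNs are placeholders.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    # Target side: the host may authenticate with key1 only.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

    # Host side: attaching with key2 must fail; the failure surfaces as
    # JSON-RPC error -5 (Input/output error), matching the response above.
    if "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
      echo "FAIL: attach with mismatched key unexpectedly succeeded" >&2
      exit 1
    fi
    echo "PASS: mismatched key rejected"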
18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:27.053 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:27.619 request: 00:17:27.619 { 00:17:27.619 "name": "nvme0", 00:17:27.619 "trtype": "tcp", 00:17:27.619 "traddr": "10.0.0.2", 00:17:27.619 "adrfam": "ipv4", 00:17:27.619 "trsvcid": "4420", 00:17:27.619 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:27.619 "prchk_reftag": false, 00:17:27.619 "prchk_guard": false, 00:17:27.619 "hdgst": false, 00:17:27.619 "ddgst": false, 00:17:27.619 "dhchap_key": "key1", 00:17:27.619 "dhchap_ctrlr_key": "ckey2", 00:17:27.619 "method": "bdev_nvme_attach_controller", 00:17:27.619 "req_id": 1 00:17:27.619 } 00:17:27.619 Got JSON-RPC error response 00:17:27.619 response: 00:17:27.619 { 00:17:27.619 "code": -5, 00:17:27.619 "message": "Input/output error" 00:17:27.619 } 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:27.619 18:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.619 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.620 18:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.553 request: 00:17:28.553 { 00:17:28.553 "name": "nvme0", 00:17:28.553 "trtype": "tcp", 00:17:28.553 "traddr": "10.0.0.2", 00:17:28.553 "adrfam": "ipv4", 00:17:28.553 "trsvcid": "4420", 00:17:28.553 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.553 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:28.553 "prchk_reftag": false, 00:17:28.553 "prchk_guard": false, 00:17:28.553 "hdgst": false, 00:17:28.553 "ddgst": false, 00:17:28.553 "dhchap_key": "key1", 00:17:28.553 "dhchap_ctrlr_key": "ckey1", 00:17:28.553 "method": "bdev_nvme_attach_controller", 00:17:28.553 "req_id": 1 00:17:28.553 } 00:17:28.553 Got JSON-RPC error response 00:17:28.553 response: 00:17:28.553 { 00:17:28.553 "code": -5, 00:17:28.553 "message": "Input/output error" 00:17:28.553 } 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2774788 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2774788 ']' 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2774788 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2774788 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2774788' 00:17:28.553 killing process with pid 2774788 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2774788 00:17:28.553 18:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2774788 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2797585 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2797585 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2797585 ']' 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.811 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2797585 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2797585 ']' 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
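At this point the first target process (pid 2774788) has been killed and the test restarts nvmf_tgt with --wait-for-rpc and the nvmf_auth debug log flag, so the remaining re-authentication cases run against a fresh target. A rough stand-in for this restart-and-wait step is sketched below, assuming the netns name and binary path shown in the trace; the polling loop only approximates the waitforlisten helper from common/autotest_common.sh rather than reproducing it.

    # Launch the target inside the test netns with auth debug logging enabled,
    # then block until its RPC socket answers. Values mirror the trace; the
    # readiness loop is an approximation of waitforlisten, not a copy of it.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready on /var/tmp/spdk.sock"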
00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.069 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.636 18:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.586 00:17:30.586 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.586 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.586 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.845 { 00:17:30.845 "cntlid": 1, 00:17:30.845 "qid": 0, 00:17:30.845 "state": "enabled", 00:17:30.845 "thread": "nvmf_tgt_poll_group_000", 00:17:30.845 "listen_address": { 00:17:30.845 "trtype": "TCP", 00:17:30.845 "adrfam": "IPv4", 00:17:30.845 "traddr": "10.0.0.2", 00:17:30.845 "trsvcid": "4420" 00:17:30.845 }, 00:17:30.845 "peer_address": { 00:17:30.845 "trtype": "TCP", 00:17:30.845 "adrfam": "IPv4", 00:17:30.845 "traddr": "10.0.0.1", 00:17:30.845 "trsvcid": "58112" 00:17:30.845 }, 00:17:30.845 "auth": { 00:17:30.845 "state": "completed", 00:17:30.845 "digest": "sha512", 00:17:30.845 "dhgroup": "ffdhe8192" 00:17:30.845 } 00:17:30.845 } 00:17:30.845 ]' 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.845 18:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.845 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.845 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.845 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.104 18:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:YzRkMWVhMjU2MDczM2IwZjEwOTY0MzcxNmVmMWRjZDVmYTZiMzdlM2I2MWM2NTE3ZGI3ZTI1YjVmYWIxNWJkOfox6pg=: 00:17:32.039 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.039 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.039 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.039 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.298 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.556 request: 00:17:32.556 { 00:17:32.556 "name": "nvme0", 00:17:32.556 "trtype": "tcp", 00:17:32.556 "traddr": "10.0.0.2", 00:17:32.556 "adrfam": "ipv4", 00:17:32.556 "trsvcid": "4420", 00:17:32.556 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:32.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:32.556 "prchk_reftag": false, 00:17:32.556 "prchk_guard": false, 00:17:32.556 "hdgst": false, 00:17:32.556 "ddgst": false, 00:17:32.556 "dhchap_key": "key3", 00:17:32.556 "method": "bdev_nvme_attach_controller", 00:17:32.556 "req_id": 1 00:17:32.556 } 00:17:32.556 Got JSON-RPC error response 00:17:32.556 response: 00:17:32.556 { 00:17:32.556 "code": -5, 00:17:32.556 "message": "Input/output error" 00:17:32.556 } 00:17:32.556 18:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:32.556 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:32.556 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:32.556 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:32.556 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:32.556 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:32.556 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:32.556 18:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.813 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.070 request: 00:17:33.070 { 00:17:33.070 "name": "nvme0", 00:17:33.070 "trtype": "tcp", 00:17:33.070 "traddr": "10.0.0.2", 00:17:33.070 "adrfam": "ipv4", 00:17:33.070 "trsvcid": "4420", 00:17:33.070 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:33.070 "prchk_reftag": false, 00:17:33.070 "prchk_guard": false, 00:17:33.070 "hdgst": false, 00:17:33.070 "ddgst": false, 00:17:33.070 "dhchap_key": "key3", 00:17:33.070 
"method": "bdev_nvme_attach_controller", 00:17:33.070 "req_id": 1 00:17:33.070 } 00:17:33.070 Got JSON-RPC error response 00:17:33.070 response: 00:17:33.070 { 00:17:33.070 "code": -5, 00:17:33.070 "message": "Input/output error" 00:17:33.070 } 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:33.070 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.328 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.586 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.586 request: 00:17:33.586 { 00:17:33.586 "name": "nvme0", 00:17:33.586 "trtype": "tcp", 00:17:33.586 "traddr": "10.0.0.2", 00:17:33.586 "adrfam": "ipv4", 00:17:33.586 "trsvcid": "4420", 00:17:33.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:33.586 "prchk_reftag": false, 00:17:33.586 "prchk_guard": false, 00:17:33.586 "hdgst": false, 00:17:33.586 "ddgst": false, 00:17:33.586 "dhchap_key": "key0", 00:17:33.586 "dhchap_ctrlr_key": "key1", 00:17:33.586 "method": "bdev_nvme_attach_controller", 00:17:33.586 "req_id": 1 00:17:33.586 } 00:17:33.586 Got JSON-RPC error response 00:17:33.586 response: 00:17:33.586 { 00:17:33.586 "code": -5, 00:17:33.586 "message": "Input/output error" 00:17:33.586 } 00:17:33.845 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:33.845 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:33.845 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:33.845 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:33.845 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:33.845 18:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:34.103 00:17:34.103 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:34.103 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
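The final attach with key0 above succeeds, and the lines that follow are the last pass over the verification pattern this trace repeats after every attach: list the host's controllers, assert the expected name, and detach (the earlier connect_authenticate cases additionally query the target's qpairs for the negotiated auth state). Condensed into a standalone form, reusing the RPC, HOST_SOCK, and SUBNQN placeholders from the first sketch:

    # Verify-and-teardown pattern from the trace: confirm the controller came
    # up, confirm the target negotiated DH-HMAC-CHAP to completion, detach.
    name=$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || { echo "unexpected controller: $name" >&2; exit 1; }

    state=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state')
    [[ $state == completed ]] || { echo "auth state: $state" >&2; exit 1; }

    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0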
00:17:34.103 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.362 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.362 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.362 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2774814 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2774814 ']' 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2774814 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2774814 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2774814' 00:17:34.641 killing process with pid 2774814 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2774814 00:17:34.641 18:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2774814 00:17:34.900 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:34.900 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.900 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:34.900 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.159 rmmod nvme_tcp 00:17:35.159 rmmod nvme_fabrics 00:17:35.159 rmmod nvme_keyring 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 2797585 ']' 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2797585 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2797585 ']' 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2797585 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2797585 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2797585' 00:17:35.159 killing process with pid 2797585 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2797585 00:17:35.159 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2797585 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.418 18:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.3EM /tmp/spdk.key-sha256.Pcj /tmp/spdk.key-sha384.UF1 /tmp/spdk.key-sha512.r8b /tmp/spdk.key-sha512.nJl /tmp/spdk.key-sha384.O2O /tmp/spdk.key-sha256.Zyq '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:37.384 00:17:37.384 real 3m9.520s 00:17:37.384 user 7m21.230s 00:17:37.384 sys 0m25.103s 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.384 ************************************ 00:17:37.384 END TEST nvmf_auth_target 00:17:37.384 ************************************ 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:37.384 18:00:23 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.384 ************************************ 00:17:37.384 START TEST nvmf_bdevio_no_huge 00:17:37.384 ************************************ 00:17:37.384 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:37.642 * Looking for test storage... 00:17:37.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.642 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.643 18:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.643 18:00:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.542 18:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:39.542 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.542 18:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:39.542 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:39.542 Found net devices under 0000:09:00.0: cvl_0_0 00:17:39.542 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
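The device walk traced here maps each supported e810 PCI function (0x8086:0x159b, bound to the ice driver) to its kernel net device through sysfs. Condensed, the loop amounts to the following sketch, using this run's bus addresses:

  # For each e810 port, list the net devices the kernel created under the
  # PCI function, then strip the sysfs path prefix to keep only the name.
  for pci in 0000:09:00.0 0000:09:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done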
00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:39.543 Found net devices under 0000:09:00.1: cvl_0_1 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.543 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:17:39.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:17:39.543 00:17:39.543 --- 10.0.0.2 ping statistics --- 00:17:39.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.543 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:17:39.543 00:17:39.543 --- 10.0.0.1 ping statistics --- 00:17:39.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.543 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.543 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2800347 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2800347 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2800347 ']' 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
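Put together, the nvmf_tcp_init sequence traced above isolates the target-side port in its own network namespace and then launches the target there without hugepages. A condensed sketch with this run's interface names (the preliminary address flushes are omitted):

  # Target port cvl_0_0 moves into its own namespace; the initiator keeps
  # cvl_0_1 in the root namespace, so 10.0.0.1 <-> 10.0.0.2 is real TCP
  # across the back-to-back e810 pair.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # --no-huge runs DPDK on plain 4 KiB pages, -s 1024 caps its memory at
  # 1024 MB, and -m 0x78 pins the reactors to cores 3-6 (matching the
  # "Reactor started on core 3/4/5/6" notices below).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78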
00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.802 18:00:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.802 [2024-07-24 18:00:25.874757] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:17:39.802 [2024-07-24 18:00:25.874849] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:39.802 [2024-07-24 18:00:25.949138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.060 [2024-07-24 18:00:26.071910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.060 [2024-07-24 18:00:26.071985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.060 [2024-07-24 18:00:26.072002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.060 [2024-07-24 18:00:26.072016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.060 [2024-07-24 18:00:26.072027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.060 [2024-07-24 18:00:26.072122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:40.060 [2024-07-24 18:00:26.072164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:40.060 [2024-07-24 18:00:26.072226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:40.060 [2024-07-24 18:00:26.072229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 [2024-07-24 18:00:26.827121] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.624 18:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 Malloc0 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.624 [2024-07-24 18:00:26.865046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.624 { 00:17:40.624 "params": { 00:17:40.624 "name": "Nvme$subsystem", 00:17:40.624 "trtype": "$TEST_TRANSPORT", 00:17:40.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.624 "adrfam": "ipv4", 00:17:40.624 "trsvcid": "$NVMF_PORT", 00:17:40.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.624 "hdgst": ${hdgst:-false}, 00:17:40.624 "ddgst": ${ddgst:-false} 00:17:40.624 }, 00:17:40.624 "method": "bdev_nvme_attach_controller" 00:17:40.624 } 00:17:40.624 EOF 00:17:40.624 )") 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
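Before bdevio starts, the target side is configured with the handful of RPCs traced above; rpc_cmd forwards its arguments to rpc.py, so in rpc.py terms the setup is roughly this sketch (flags and sizes taken from the trace):

  # TCP transport (-t tcp -o -u 8192 as traced), a 64 MiB malloc bdev with
  # 512-byte blocks, and a subsystem exposing it on 10.0.0.2:4420.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects as an NVMe-oF initiator using the JSON generated by gen_nvmf_target_json, printed next.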
00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:40.624 18:00:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.624 "params": { 00:17:40.624 "name": "Nvme1", 00:17:40.624 "trtype": "tcp", 00:17:40.624 "traddr": "10.0.0.2", 00:17:40.624 "adrfam": "ipv4", 00:17:40.624 "trsvcid": "4420", 00:17:40.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.624 "hdgst": false, 00:17:40.624 "ddgst": false 00:17:40.624 }, 00:17:40.624 "method": "bdev_nvme_attach_controller" 00:17:40.624 }' 00:17:40.881 [2024-07-24 18:00:26.913577] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:17:40.881 [2024-07-24 18:00:26.913667] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2800462 ] 00:17:40.881 [2024-07-24 18:00:26.976919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.881 [2024-07-24 18:00:27.092651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.881 [2024-07-24 18:00:27.092696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.881 [2024-07-24 18:00:27.092699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.138 I/O targets: 00:17:41.138 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:41.138 00:17:41.138 00:17:41.138 CUnit - A unit testing framework for C - Version 2.1-3 00:17:41.138 http://cunit.sourceforge.net/ 00:17:41.138 00:17:41.138 00:17:41.138 Suite: bdevio tests on: Nvme1n1 00:17:41.138 Test: blockdev write read block ...passed 00:17:41.138 Test: blockdev write zeroes read block ...passed 00:17:41.138 Test: blockdev write zeroes read no split ...passed 00:17:41.396 Test: blockdev write zeroes read split ...passed 00:17:41.396 Test: blockdev write zeroes read split partial ...passed 00:17:41.396 Test: blockdev reset ...[2024-07-24 18:00:27.503550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.396 [2024-07-24 18:00:27.503656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2fb0 (9): Bad file descriptor 00:17:41.396 [2024-07-24 18:00:27.640064] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:41.396 passed 00:17:41.396 Test: blockdev write read 8 blocks ...passed 00:17:41.396 Test: blockdev write read size > 128k ...passed 00:17:41.396 Test: blockdev write read invalid size ...passed 00:17:41.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:41.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:41.652 Test: blockdev write read max offset ...passed 00:17:41.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:41.652 Test: blockdev writev readv 8 blocks ...passed 00:17:41.652 Test: blockdev writev readv 30 x 1block ...passed 00:17:41.652 Test: blockdev writev readv block ...passed 00:17:41.652 Test: blockdev writev readv size > 128k ...passed 00:17:41.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:41.652 Test: blockdev comparev and writev ...[2024-07-24 18:00:27.813955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.652 [2024-07-24 18:00:27.813990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.652 [2024-07-24 18:00:27.814015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.652 [2024-07-24 18:00:27.814031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:41.652 [2024-07-24 18:00:27.814383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.652 [2024-07-24 18:00:27.814414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:41.652 [2024-07-24 18:00:27.814437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.652 [2024-07-24 18:00:27.814453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:41.652 [2024-07-24 18:00:27.814816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.652 [2024-07-24 18:00:27.814839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:41.652 [2024-07-24 18:00:27.814861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.652 [2024-07-24 18:00:27.814877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:41.652 [2024-07-24 18:00:27.815246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.652 [2024-07-24 18:00:27.815270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:41.652 [2024-07-24 18:00:27.815292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:41.653 [2024-07-24 18:00:27.815308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:41.653 passed 00:17:41.653 Test: blockdev nvme passthru rw ...passed 00:17:41.653 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:00:27.897409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.653 [2024-07-24 18:00:27.897436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:41.653 [2024-07-24 18:00:27.897614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.653 [2024-07-24 18:00:27.897638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:41.653 [2024-07-24 18:00:27.897811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.653 [2024-07-24 18:00:27.897834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:41.653 [2024-07-24 18:00:27.898009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:41.653 [2024-07-24 18:00:27.898033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:41.653 passed 00:17:41.653 Test: blockdev nvme admin passthru ...passed 00:17:41.910 Test: blockdev copy ...passed 00:17:41.910 00:17:41.910 Run Summary: Type Total Ran Passed Failed Inactive 00:17:41.910 suites 1 1 n/a 0 0 00:17:41.910 tests 23 23 23 0 0 00:17:41.910 asserts 152 152 152 0 n/a 00:17:41.910 00:17:41.910 Elapsed time = 1.325 seconds 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.167 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.168 rmmod nvme_tcp 00:17:42.168 rmmod nvme_fabrics 00:17:42.168 rmmod nvme_keyring 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2800347 ']' 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2800347 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2800347 ']' 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2800347 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2800347 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2800347' 00:17:42.168 killing process with pid 2800347 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2800347 00:17:42.168 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2800347 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.733 18:00:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.635 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.635 00:17:44.635 real 0m7.267s 00:17:44.635 user 0m13.789s 00:17:44.635 sys 0m2.584s 00:17:44.635 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:44.635 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.635 ************************************ 00:17:44.635 END TEST nvmf_bdevio_no_huge 00:17:44.635 ************************************ 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.894 ************************************ 00:17:44.894 START TEST nvmf_tls 00:17:44.894 ************************************ 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:44.894 * Looking for test storage... 00:17:44.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.894 18:00:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
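As in the earlier suites, build_nvmf_app_args assembles the target's argument list from the environment; the fragments visible in these traces correspond roughly to the following sketch (only the branches actually taken here, not the full function):

  # Always pass the shared-memory id and enable all tracepoint groups;
  # append the no-huge flags only when the suite populated NO_HUGE.
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  NVMF_APP+=("${NO_HUGE[@]}")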
00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.894 18:00:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:46.794 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:46.794 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:46.794 Found net devices under 0000:09:00.0: cvl_0_0 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:46.794 Found net devices under 0000:09:00.1: cvl_0_1 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.794 18:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.794 18:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.794 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.794 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.794 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.794 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.794 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.794 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.794 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:17:46.794 00:17:46.794 --- 10.0.0.2 ping statistics --- 00:17:46.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.794 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:17:47.051 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:47.052 00:17:47.052 --- 10.0.0.1 ping statistics --- 00:17:47.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.052 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2802582 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2802582 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2802582 ']' 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.052 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.052 [2024-07-24 18:00:33.140303] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:17:47.052 [2024-07-24 18:00:33.140406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.052 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.052 [2024-07-24 18:00:33.207566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.313 [2024-07-24 18:00:33.321653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.313 [2024-07-24 18:00:33.321719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.313 [2024-07-24 18:00:33.321749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.313 [2024-07-24 18:00:33.321761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.313 [2024-07-24 18:00:33.321771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.313 [2024-07-24 18:00:33.321799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:47.313 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:47.570 true 00:17:47.570 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.570 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:47.828 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:47.828 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:47.828 18:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:48.086 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.086 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:48.349 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:48.349 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:48.350 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:17:48.611 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.611 18:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:48.868 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:48.868 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:48.868 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.868 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:49.125 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:49.125 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:49.125 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:49.382 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.382 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:49.640 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:49.640 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:49.640 18:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:49.898 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.898 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ZrJATc7J1s 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.a6RSxrz8K8 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ZrJATc7J1s 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.a6RSxrz8K8 00:17:50.156 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:50.415 18:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:50.982 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ZrJATc7J1s 00:17:50.982 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZrJATc7J1s 00:17:50.982 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.240 [2024-07-24 18:00:37.297505] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.240 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.497 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:51.755 [2024-07-24 18:00:37.830942] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:51.755 [2024-07-24 18:00:37.831198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.755 18:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:52.014 malloc0 00:17:52.014 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.272 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZrJATc7J1s 00:17:52.530 [2024-07-24 18:00:38.559608] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:52.530 18:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZrJATc7J1s 00:17:52.530 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.588 Initializing NVMe Controllers 00:18:02.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.588 Initialization complete. Launching workers. 00:18:02.588 ======================================================== 00:18:02.588 Latency(us) 00:18:02.588 Device Information : IOPS MiB/s Average min max 00:18:02.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7867.35 30.73 8137.55 1241.54 12267.54 00:18:02.588 ======================================================== 00:18:02.588 Total : 7867.35 30.73 8137.55 1241.54 12267.54 00:18:02.588 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrJATc7J1s 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZrJATc7J1s' 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2804474 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2804474 /var/tmp/bdevperf.sock 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2804474 ']' 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.588 18:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.588 [2024-07-24 18:00:48.750952] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:02.588 [2024-07-24 18:00:48.751045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804474 ] 00:18:02.588 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.588 [2024-07-24 18:00:48.807242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.845 [2024-07-24 18:00:48.911537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.845 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.845 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:02.845 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZrJATc7J1s 00:18:03.102 [2024-07-24 18:00:49.260918] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.102 [2024-07-24 18:00:49.261035] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:03.102 TLSTESTn1 00:18:03.102 18:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.393 Running I/O for 10 seconds... 
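Note: while the verify job above runs, it is worth condensing the target-side TLS setup traced at target/tls.sh@49-@58. It reduces to the RPC sequence below (a sketch: rpc.py stands for the full scripts/rpc.py path used in the trace, and the key file is the one written at @127):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k puts the listener in TLS mode, which is what triggers the
    # "TLS support is considered experimental" notice seen earlier
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZrJATc7J1s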
00:18:13.362 00:18:13.362 Latency(us) 00:18:13.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.362 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.362 Verification LBA range: start 0x0 length 0x2000 00:18:13.362 TLSTESTn1 : 10.04 3061.18 11.96 0.00 0.00 41712.87 5971.06 76507.21 00:18:13.362 =================================================================================================================== 00:18:13.362 Total : 3061.18 11.96 0.00 0.00 41712.87 5971.06 76507.21 00:18:13.362 0 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2804474 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2804474 ']' 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2804474 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2804474 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2804474' 00:18:13.362 killing process with pid 2804474 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2804474 00:18:13.362 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.362 00:18:13.362 Latency(us) 00:18:13.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.362 =================================================================================================================== 00:18:13.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.362 [2024-07-24 18:00:59.556550] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:13.362 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2804474 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a6RSxrz8K8 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a6RSxrz8K8 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a6RSxrz8K8 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a6RSxrz8K8' 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2805736 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2805736 /var/tmp/bdevperf.sock 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2805736 ']' 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.620 18:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.620 [2024-07-24 18:00:59.873429] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:18:13.620 [2024-07-24 18:00:59.873536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805736 ] 00:18:13.878 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.878 [2024-07-24 18:00:59.938456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.878 [2024-07-24 18:01:00.051237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.134 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.134 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.134 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a6RSxrz8K8 00:18:14.391 [2024-07-24 18:01:00.435397] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.391 [2024-07-24 18:01:00.435502] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:14.391 [2024-07-24 18:01:00.447270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:14.391 [2024-07-24 18:01:00.448309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1103f90 (107): Transport endpoint is not connected 00:18:14.391 [2024-07-24 18:01:00.449286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1103f90 (9): Bad file descriptor 00:18:14.391 [2024-07-24 18:01:00.450284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.391 [2024-07-24 18:01:00.450304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:14.391 [2024-07-24 18:01:00.450321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:14.391 request: 00:18:14.391 { 00:18:14.391 "name": "TLSTEST", 00:18:14.391 "trtype": "tcp", 00:18:14.391 "traddr": "10.0.0.2", 00:18:14.391 "adrfam": "ipv4", 00:18:14.391 "trsvcid": "4420", 00:18:14.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.391 "prchk_reftag": false, 00:18:14.391 "prchk_guard": false, 00:18:14.391 "hdgst": false, 00:18:14.391 "ddgst": false, 00:18:14.391 "psk": "/tmp/tmp.a6RSxrz8K8", 00:18:14.391 "method": "bdev_nvme_attach_controller", 00:18:14.391 "req_id": 1 00:18:14.391 } 00:18:14.391 Got JSON-RPC error response 00:18:14.391 response: 00:18:14.391 { 00:18:14.391 "code": -5, 00:18:14.391 "message": "Input/output error" 00:18:14.391 } 00:18:14.391 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2805736 00:18:14.391 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2805736 ']' 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2805736 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805736 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805736' 00:18:14.392 killing process with pid 2805736 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2805736 00:18:14.392 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.392 00:18:14.392 Latency(us) 00:18:14.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.392 =================================================================================================================== 00:18:14.392 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.392 [2024-07-24 18:01:00.499543] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:14.392 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2805736 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZrJATc7J1s 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZrJATc7J1s 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZrJATc7J1s 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZrJATc7J1s' 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2805808 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2805808 /var/tmp/bdevperf.sock 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2805808 ']' 00:18:14.649 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.650 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.650 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.650 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.650 18:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.650 [2024-07-24 18:01:00.804503] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:18:14.650 [2024-07-24 18:01:00.804596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805808 ] 00:18:14.650 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.650 [2024-07-24 18:01:00.860794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.921 [2024-07-24 18:01:00.971535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.921 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.921 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.921 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ZrJATc7J1s 00:18:15.179 [2024-07-24 18:01:01.306751] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.179 [2024-07-24 18:01:01.306865] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.179 [2024-07-24 18:01:01.311921] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:15.179 [2024-07-24 18:01:01.311955] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:15.179 [2024-07-24 18:01:01.312003] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:15.179 [2024-07-24 18:01:01.312540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639f90 (107): Transport endpoint is not connected 00:18:15.179 [2024-07-24 18:01:01.313527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639f90 (9): Bad file descriptor 00:18:15.179 [2024-07-24 18:01:01.314526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:15.179 [2024-07-24 18:01:01.314544] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.180 [2024-07-24 18:01:01.314569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
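Note: the target-side lookup failure above is the decisive step of this case. The PSK table is keyed by the TLS PSK identity string, and host2 was never added with nvmf_subsystem_add_host, so tcp_sock_get_key misses and the handshake, and with it the attach, fails as intended. The identity the target searched for (copied verbatim from the error, with the NVMe0R01 designators reproduced as logged) is:

    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1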
00:18:15.180 request: 00:18:15.180 { 00:18:15.180 "name": "TLSTEST", 00:18:15.180 "trtype": "tcp", 00:18:15.180 "traddr": "10.0.0.2", 00:18:15.180 "adrfam": "ipv4", 00:18:15.180 "trsvcid": "4420", 00:18:15.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.180 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:15.180 "prchk_reftag": false, 00:18:15.180 "prchk_guard": false, 00:18:15.180 "hdgst": false, 00:18:15.180 "ddgst": false, 00:18:15.180 "psk": "/tmp/tmp.ZrJATc7J1s", 00:18:15.180 "method": "bdev_nvme_attach_controller", 00:18:15.180 "req_id": 1 00:18:15.180 } 00:18:15.180 Got JSON-RPC error response 00:18:15.180 response: 00:18:15.180 { 00:18:15.180 "code": -5, 00:18:15.180 "message": "Input/output error" 00:18:15.180 } 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2805808 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2805808 ']' 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2805808 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805808 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805808' 00:18:15.180 killing process with pid 2805808 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2805808 00:18:15.180 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.180 00:18:15.180 Latency(us) 00:18:15.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.180 =================================================================================================================== 00:18:15.180 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.180 [2024-07-24 18:01:01.353621] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:15.180 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2805808 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrJATc7J1s 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrJATc7J1s 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrJATc7J1s 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZrJATc7J1s' 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2805948 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2805948 /var/tmp/bdevperf.sock 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2805948 ']' 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.438 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.438 [2024-07-24 18:01:01.637642] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:18:15.438 [2024-07-24 18:01:01.637737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805948 ] 00:18:15.438 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.438 [2024-07-24 18:01:01.695179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.696 [2024-07-24 18:01:01.797426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.696 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.696 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.696 18:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZrJATc7J1s 00:18:15.954 [2024-07-24 18:01:02.146477] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.954 [2024-07-24 18:01:02.146602] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.954 [2024-07-24 18:01:02.151836] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:15.954 [2024-07-24 18:01:02.151870] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:15.954 [2024-07-24 18:01:02.151921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:15.954 [2024-07-24 18:01:02.152415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcaf90 (107): Transport endpoint is not connected 00:18:15.954 [2024-07-24 18:01:02.153396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcaf90 (9): Bad file descriptor 00:18:15.954 [2024-07-24 18:01:02.154399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:15.954 [2024-07-24 18:01:02.154434] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.954 [2024-07-24 18:01:02.154451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:15.954 request: 00:18:15.954 { 00:18:15.954 "name": "TLSTEST", 00:18:15.954 "trtype": "tcp", 00:18:15.954 "traddr": "10.0.0.2", 00:18:15.954 "adrfam": "ipv4", 00:18:15.954 "trsvcid": "4420", 00:18:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:15.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.954 "prchk_reftag": false, 00:18:15.954 "prchk_guard": false, 00:18:15.954 "hdgst": false, 00:18:15.954 "ddgst": false, 00:18:15.954 "psk": "/tmp/tmp.ZrJATc7J1s", 00:18:15.954 "method": "bdev_nvme_attach_controller", 00:18:15.954 "req_id": 1 00:18:15.954 } 00:18:15.954 Got JSON-RPC error response 00:18:15.954 response: 00:18:15.954 { 00:18:15.954 "code": -5, 00:18:15.954 "message": "Input/output error" 00:18:15.954 } 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2805948 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2805948 ']' 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2805948 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805948 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805948' 00:18:15.954 killing process with pid 2805948 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2805948 00:18:15.954 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.954 00:18:15.954 Latency(us) 00:18:15.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.954 =================================================================================================================== 00:18:15.954 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.954 [2024-07-24 18:01:02.200158] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:15.954 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2805948 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2806084 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2806084 /var/tmp/bdevperf.sock 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2806084 ']' 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.212 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.471 [2024-07-24 18:01:02.503077] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:18:16.471 [2024-07-24 18:01:02.503261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806084 ] 00:18:16.471 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.471 [2024-07-24 18:01:02.564130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.471 [2024-07-24 18:01:02.667381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.729 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.729 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:16.729 18:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:16.987 [2024-07-24 18:01:03.004910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:16.987 [2024-07-24 18:01:03.006759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59c770 (9): Bad file descriptor 00:18:16.987 [2024-07-24 18:01:03.007756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:16.987 [2024-07-24 18:01:03.007776] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:16.987 [2024-07-24 18:01:03.007801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
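
This second failure exercises the complementary case: the controller is attached with no --psk at all, so the plain-TCP connection against the TLS-requiring listener is torn down before initialization finishes, and rpc.py again surfaces it as -5 (Input/output error) in the dump below. Stripped of the xtrace noise, the failing call is the log's own attach command minus its key argument; a sketch, with paths as in this workspace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# No --psk: the TLS-only listener on 10.0.0.2:4420 drops the session,
# so this attach is expected to fail with "Input/output error".
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
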
00:18:16.987 request: 00:18:16.987 { 00:18:16.987 "name": "TLSTEST", 00:18:16.987 "trtype": "tcp", 00:18:16.987 "traddr": "10.0.0.2", 00:18:16.987 "adrfam": "ipv4", 00:18:16.987 "trsvcid": "4420", 00:18:16.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.987 "prchk_reftag": false, 00:18:16.987 "prchk_guard": false, 00:18:16.987 "hdgst": false, 00:18:16.987 "ddgst": false, 00:18:16.987 "method": "bdev_nvme_attach_controller", 00:18:16.987 "req_id": 1 00:18:16.987 } 00:18:16.987 Got JSON-RPC error response 00:18:16.987 response: 00:18:16.987 { 00:18:16.987 "code": -5, 00:18:16.987 "message": "Input/output error" 00:18:16.987 } 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2806084 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2806084 ']' 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2806084 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2806084 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2806084' 00:18:16.987 killing process with pid 2806084 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2806084 00:18:16.987 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.987 00:18:16.987 Latency(us) 00:18:16.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.987 =================================================================================================================== 00:18:16.987 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.987 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2806084 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2802582 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2802582 ']' 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2802582 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2802582 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2802582' 00:18:17.253 killing process with pid 2802582 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2802582 00:18:17.253 [2024-07-24 18:01:03.333126] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:17.253 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2802582 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.T1x8jMk37A 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.T1x8jMk37A 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2806237 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2806237 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2806237 ']' 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.514 18:01:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.514 18:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.514 [2024-07-24 18:01:03.736484] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:17.514 [2024-07-24 18:01:03.736564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.514 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.772 [2024-07-24 18:01:03.802028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.772 [2024-07-24 18:01:03.911819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.772 [2024-07-24 18:01:03.911873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.772 [2024-07-24 18:01:03.911886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.772 [2024-07-24 18:01:03.911896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.772 [2024-07-24 18:01:03.911905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
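
Before the next target comes up, it is worth unpacking what format_interchange_psk did above: it wraps the configured key in the TLS PSK interchange format, NVMeTLSkey-1:<hash>:<base64>:, where the base64 payload is the key bytes with a 4-byte CRC32 trailer appended. A minimal sketch of that helper, assuming a little-endian CRC as the wWXNJw== suffix above suggests (the authoritative version is format_key in nvmf/common.sh):

format_key() {
    local prefix=$1 key=$2 digest=$3
    python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 trailer
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

Note that the resulting key file is immediately chmod'ed to 0600 by the test above; the permission tests that follow show why.
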
00:18:17.772 [2024-07-24 18:01:03.911933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.772 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.772 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.772 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.772 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.772 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.030 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.030 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.T1x8jMk37A 00:18:18.030 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T1x8jMk37A 00:18:18.030 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:18.287 [2024-07-24 18:01:04.325644] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.287 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:18.545 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:18.803 [2024-07-24 18:01:04.814977] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.803 [2024-07-24 18:01:04.815248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.803 18:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:18.803 malloc0 00:18:19.061 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.061 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T1x8jMk37A 00:18:19.319 [2024-07-24 18:01:05.557274] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T1x8jMk37A 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.T1x8jMk37A' 00:18:19.319 18:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2806518 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2806518 /var/tmp/bdevperf.sock 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2806518 ']' 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.319 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.577 [2024-07-24 18:01:05.617381] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:19.577 [2024-07-24 18:01:05.617470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806518 ] 00:18:19.577 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.577 [2024-07-24 18:01:05.675628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.577 [2024-07-24 18:01:05.779655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.835 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.835 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:19.835 18:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T1x8jMk37A 00:18:20.099 [2024-07-24 18:01:06.113584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.099 [2024-07-24 18:01:06.113706] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:20.099 TLSTESTn1 00:18:20.099 18:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.099 Running I/O for 10 seconds... 
00:18:32.297 00:18:32.297 Latency(us) 00:18:32.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.297 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:32.297 Verification LBA range: start 0x0 length 0x2000 00:18:32.297 TLSTESTn1 : 10.04 3049.89 11.91 0.00 0.00 41864.99 5898.24 60972.75 00:18:32.297 =================================================================================================================== 00:18:32.297 Total : 3049.89 11.91 0.00 0.00 41864.99 5898.24 60972.75 00:18:32.297 0 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2806518 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2806518 ']' 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2806518 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2806518 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2806518' 00:18:32.297 killing process with pid 2806518 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2806518 00:18:32.297 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.297 00:18:32.297 Latency(us) 00:18:32.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.297 =================================================================================================================== 00:18:32.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.297 [2024-07-24 18:01:16.420407] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2806518 00:18:32.297 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.T1x8jMk37A 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T1x8jMk37A 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T1x8jMk37A 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:32.298 
18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T1x8jMk37A 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.T1x8jMk37A' 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2807726 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2807726 /var/tmp/bdevperf.sock 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2807726 ']' 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.298 18:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.298 [2024-07-24 18:01:16.737453] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:18:32.298 [2024-07-24 18:01:16.737549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2807726 ] 00:18:32.298 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.298 [2024-07-24 18:01:16.795326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.298 [2024-07-24 18:01:16.898123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T1x8jMk37A 00:18:32.298 [2024-07-24 18:01:17.251736] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.298 [2024-07-24 18:01:17.251818] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:32.298 [2024-07-24 18:01:17.251833] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.T1x8jMk37A 00:18:32.298 request: 00:18:32.298 { 00:18:32.298 "name": "TLSTEST", 00:18:32.298 "trtype": "tcp", 00:18:32.298 "traddr": "10.0.0.2", 00:18:32.298 "adrfam": "ipv4", 00:18:32.298 "trsvcid": "4420", 00:18:32.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.298 "prchk_reftag": false, 00:18:32.298 "prchk_guard": false, 00:18:32.298 "hdgst": false, 00:18:32.298 "ddgst": false, 00:18:32.298 "psk": "/tmp/tmp.T1x8jMk37A", 00:18:32.298 "method": "bdev_nvme_attach_controller", 00:18:32.298 "req_id": 1 00:18:32.298 } 00:18:32.298 Got JSON-RPC error response 00:18:32.298 response: 00:18:32.298 { 00:18:32.298 "code": -1, 00:18:32.298 "message": "Operation not permitted" 00:18:32.298 } 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2807726 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2807726 ']' 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2807726 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2807726 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2807726' 00:18:32.298 killing process with pid 2807726 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2807726 00:18:32.298 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.298 
00:18:32.298 Latency(us) 00:18:32.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.298 =================================================================================================================== 00:18:32.298 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2807726 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2806237 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2806237 ']' 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2806237 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2806237 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2806237' 00:18:32.298 killing process with pid 2806237 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2806237 00:18:32.298 [2024-07-24 18:01:17.591281] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2806237 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2807871 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2807871 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2807871 ']' 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.298 18:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.298 18:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.298 [2024-07-24 18:01:17.929869] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:32.298 [2024-07-24 18:01:17.929965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.298 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.298 [2024-07-24 18:01:17.996537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.298 [2024-07-24 18:01:18.104282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.298 [2024-07-24 18:01:18.104334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.298 [2024-07-24 18:01:18.104358] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.298 [2024-07-24 18:01:18.104369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.298 [2024-07-24 18:01:18.104378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
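
The failed bdevperf run above and the add_host attempt below are two halves of the same permission check: after chmod 0666 the initiator's bdev_nvme_attach_controller rejected the key file ("Incorrect permissions for PSK file", -1 Operation not permitted), and the freshly started target's nvmf_subsystem_add_host will fail the same way (-32603 Internal error) until tls.sh restores 0600. Condensed, as seen in this run:

KEY=/tmp/tmp.T1x8jMk37A
chmod 0666 "$KEY"   # world-readable: both sides refuse to load the PSK
#   initiator: bdev_nvme_attach_controller ... --psk "$KEY"  -> -1     "Operation not permitted"
#   target:    nvmf_subsystem_add_host ... --psk "$KEY"      -> -32603 "Internal error"
chmod 0600 "$KEY"   # owner-only: the key is accepted again
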
00:18:32.299 [2024-07-24 18:01:18.104405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.T1x8jMk37A 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.T1x8jMk37A 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.T1x8jMk37A 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T1x8jMk37A 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:32.299 [2024-07-24 18:01:18.473744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.299 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:32.557 18:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:32.815 [2024-07-24 18:01:19.055268] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.815 [2024-07-24 18:01:19.055498] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.815 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.073 malloc0 00:18:33.332 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.332 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T1x8jMk37A 00:18:33.590 [2024-07-24 18:01:19.825066] tcp.c:3639:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:33.590 [2024-07-24 18:01:19.825132] tcp.c:3725:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:33.590 [2024-07-24 18:01:19.825173] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:33.590 request: 00:18:33.590 { 00:18:33.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.590 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.590 "psk": "/tmp/tmp.T1x8jMk37A", 00:18:33.590 "method": "nvmf_subsystem_add_host", 00:18:33.590 "req_id": 1 00:18:33.590 } 00:18:33.590 Got JSON-RPC error response 00:18:33.590 response: 00:18:33.590 { 00:18:33.590 "code": -32603, 00:18:33.590 "message": "Internal error" 00:18:33.590 } 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2807871 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2807871 ']' 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2807871 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.590 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2807871 00:18:33.848 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:33.848 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:33.848 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2807871' 00:18:33.848 killing process with pid 2807871 00:18:33.848 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2807871 00:18:33.848 18:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2807871 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.T1x8jMk37A 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2808188 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 2808188 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2808188 ']' 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.108 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.108 [2024-07-24 18:01:20.234688] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:34.108 [2024-07-24 18:01:20.234776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.108 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.108 [2024-07-24 18:01:20.304389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.371 [2024-07-24 18:01:20.425803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.371 [2024-07-24 18:01:20.425868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.371 [2024-07-24 18:01:20.425885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.371 [2024-07-24 18:01:20.425899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.371 [2024-07-24 18:01:20.425910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
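
With the key back at 0600, the sequence below is the happy path end to end: TCP transport, subsystem, TLS listener (-k), namespace, host registration with the PSK, then a bdevperf attach over TLS that finally yields a usable TLSTESTn1 bdev. The same flow in one place, with the RPC invocations taken verbatim from this run (the target itself runs inside the cvl_0_0_ns_spdk network namespace in this job):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.T1x8jMk37A

# target side
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# initiator side, against bdevperf's RPC socket
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
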
00:18:34.371 [2024-07-24 18:01:20.425952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.T1x8jMk37A 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T1x8jMk37A 00:18:34.371 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:34.629 [2024-07-24 18:01:20.805543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.629 18:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.887 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.145 [2024-07-24 18:01:21.282795] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.145 [2024-07-24 18:01:21.283026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.145 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.403 malloc0 00:18:35.403 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.662 18:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T1x8jMk37A 00:18:35.920 [2024-07-24 18:01:22.165485] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2808466 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2808466 /var/tmp/bdevperf.sock 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 2808466 ']' 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.920 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.178 [2024-07-24 18:01:22.227841] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:36.178 [2024-07-24 18:01:22.227928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808466 ] 00:18:36.178 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.178 [2024-07-24 18:01:22.285370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.178 [2024-07-24 18:01:22.389860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.436 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.436 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:36.436 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T1x8jMk37A 00:18:36.694 [2024-07-24 18:01:22.735732] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.694 [2024-07-24 18:01:22.735829] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:36.694 TLSTESTn1 00:18:36.694 18:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:36.952 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:36.952 "subsystems": [ 00:18:36.952 { 00:18:36.952 "subsystem": "keyring", 00:18:36.952 "config": [] 00:18:36.952 }, 00:18:36.952 { 00:18:36.952 "subsystem": "iobuf", 00:18:36.952 "config": [ 00:18:36.952 { 00:18:36.952 "method": "iobuf_set_options", 00:18:36.952 "params": { 00:18:36.952 "small_pool_count": 8192, 00:18:36.952 "large_pool_count": 1024, 00:18:36.952 "small_bufsize": 8192, 00:18:36.952 "large_bufsize": 135168 00:18:36.952 } 00:18:36.952 } 00:18:36.952 ] 00:18:36.952 }, 00:18:36.952 { 00:18:36.952 "subsystem": "sock", 00:18:36.952 "config": [ 00:18:36.952 { 00:18:36.952 "method": "sock_set_default_impl", 00:18:36.952 "params": { 00:18:36.952 "impl_name": "posix" 00:18:36.952 } 00:18:36.952 }, 00:18:36.952 { 00:18:36.952 "method": "sock_impl_set_options", 00:18:36.952 "params": { 00:18:36.952 "impl_name": "ssl", 00:18:36.952 "recv_buf_size": 4096, 00:18:36.952 "send_buf_size": 4096, 
00:18:36.952 "enable_recv_pipe": true, 00:18:36.952 "enable_quickack": false, 00:18:36.952 "enable_placement_id": 0, 00:18:36.952 "enable_zerocopy_send_server": true, 00:18:36.952 "enable_zerocopy_send_client": false, 00:18:36.952 "zerocopy_threshold": 0, 00:18:36.952 "tls_version": 0, 00:18:36.952 "enable_ktls": false 00:18:36.952 } 00:18:36.952 }, 00:18:36.952 { 00:18:36.952 "method": "sock_impl_set_options", 00:18:36.952 "params": { 00:18:36.952 "impl_name": "posix", 00:18:36.952 "recv_buf_size": 2097152, 00:18:36.952 "send_buf_size": 2097152, 00:18:36.952 "enable_recv_pipe": true, 00:18:36.952 "enable_quickack": false, 00:18:36.952 "enable_placement_id": 0, 00:18:36.952 "enable_zerocopy_send_server": true, 00:18:36.952 "enable_zerocopy_send_client": false, 00:18:36.952 "zerocopy_threshold": 0, 00:18:36.952 "tls_version": 0, 00:18:36.952 "enable_ktls": false 00:18:36.952 } 00:18:36.952 } 00:18:36.952 ] 00:18:36.952 }, 00:18:36.952 { 00:18:36.952 "subsystem": "vmd", 00:18:36.952 "config": [] 00:18:36.952 }, 00:18:36.952 { 00:18:36.952 "subsystem": "accel", 00:18:36.952 "config": [ 00:18:36.952 { 00:18:36.952 "method": "accel_set_options", 00:18:36.952 "params": { 00:18:36.952 "small_cache_size": 128, 00:18:36.952 "large_cache_size": 16, 00:18:36.952 "task_count": 2048, 00:18:36.952 "sequence_count": 2048, 00:18:36.952 "buf_count": 2048 00:18:36.952 } 00:18:36.952 } 00:18:36.952 ] 00:18:36.952 }, 00:18:36.952 { 00:18:36.952 "subsystem": "bdev", 00:18:36.952 "config": [ 00:18:36.952 { 00:18:36.952 "method": "bdev_set_options", 00:18:36.953 "params": { 00:18:36.953 "bdev_io_pool_size": 65535, 00:18:36.953 "bdev_io_cache_size": 256, 00:18:36.953 "bdev_auto_examine": true, 00:18:36.953 "iobuf_small_cache_size": 128, 00:18:36.953 "iobuf_large_cache_size": 16 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "bdev_raid_set_options", 00:18:36.953 "params": { 00:18:36.953 "process_window_size_kb": 1024, 00:18:36.953 "process_max_bandwidth_mb_sec": 0 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "bdev_iscsi_set_options", 00:18:36.953 "params": { 00:18:36.953 "timeout_sec": 30 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "bdev_nvme_set_options", 00:18:36.953 "params": { 00:18:36.953 "action_on_timeout": "none", 00:18:36.953 "timeout_us": 0, 00:18:36.953 "timeout_admin_us": 0, 00:18:36.953 "keep_alive_timeout_ms": 10000, 00:18:36.953 "arbitration_burst": 0, 00:18:36.953 "low_priority_weight": 0, 00:18:36.953 "medium_priority_weight": 0, 00:18:36.953 "high_priority_weight": 0, 00:18:36.953 "nvme_adminq_poll_period_us": 10000, 00:18:36.953 "nvme_ioq_poll_period_us": 0, 00:18:36.953 "io_queue_requests": 0, 00:18:36.953 "delay_cmd_submit": true, 00:18:36.953 "transport_retry_count": 4, 00:18:36.953 "bdev_retry_count": 3, 00:18:36.953 "transport_ack_timeout": 0, 00:18:36.953 "ctrlr_loss_timeout_sec": 0, 00:18:36.953 "reconnect_delay_sec": 0, 00:18:36.953 "fast_io_fail_timeout_sec": 0, 00:18:36.953 "disable_auto_failback": false, 00:18:36.953 "generate_uuids": false, 00:18:36.953 "transport_tos": 0, 00:18:36.953 "nvme_error_stat": false, 00:18:36.953 "rdma_srq_size": 0, 00:18:36.953 "io_path_stat": false, 00:18:36.953 "allow_accel_sequence": false, 00:18:36.953 "rdma_max_cq_size": 0, 00:18:36.953 "rdma_cm_event_timeout_ms": 0, 00:18:36.953 "dhchap_digests": [ 00:18:36.953 "sha256", 00:18:36.953 "sha384", 00:18:36.953 "sha512" 00:18:36.953 ], 00:18:36.953 "dhchap_dhgroups": [ 00:18:36.953 "null", 00:18:36.953 "ffdhe2048", 00:18:36.953 
"ffdhe3072", 00:18:36.953 "ffdhe4096", 00:18:36.953 "ffdhe6144", 00:18:36.953 "ffdhe8192" 00:18:36.953 ] 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "bdev_nvme_set_hotplug", 00:18:36.953 "params": { 00:18:36.953 "period_us": 100000, 00:18:36.953 "enable": false 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "bdev_malloc_create", 00:18:36.953 "params": { 00:18:36.953 "name": "malloc0", 00:18:36.953 "num_blocks": 8192, 00:18:36.953 "block_size": 4096, 00:18:36.953 "physical_block_size": 4096, 00:18:36.953 "uuid": "53ea43c6-45ea-41a5-8544-a181dc5944cc", 00:18:36.953 "optimal_io_boundary": 0, 00:18:36.953 "md_size": 0, 00:18:36.953 "dif_type": 0, 00:18:36.953 "dif_is_head_of_md": false, 00:18:36.953 "dif_pi_format": 0 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "bdev_wait_for_examine" 00:18:36.953 } 00:18:36.953 ] 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "subsystem": "nbd", 00:18:36.953 "config": [] 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "subsystem": "scheduler", 00:18:36.953 "config": [ 00:18:36.953 { 00:18:36.953 "method": "framework_set_scheduler", 00:18:36.953 "params": { 00:18:36.953 "name": "static" 00:18:36.953 } 00:18:36.953 } 00:18:36.953 ] 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "subsystem": "nvmf", 00:18:36.953 "config": [ 00:18:36.953 { 00:18:36.953 "method": "nvmf_set_config", 00:18:36.953 "params": { 00:18:36.953 "discovery_filter": "match_any", 00:18:36.953 "admin_cmd_passthru": { 00:18:36.953 "identify_ctrlr": false 00:18:36.953 } 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "nvmf_set_max_subsystems", 00:18:36.953 "params": { 00:18:36.953 "max_subsystems": 1024 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "nvmf_set_crdt", 00:18:36.953 "params": { 00:18:36.953 "crdt1": 0, 00:18:36.953 "crdt2": 0, 00:18:36.953 "crdt3": 0 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "nvmf_create_transport", 00:18:36.953 "params": { 00:18:36.953 "trtype": "TCP", 00:18:36.953 "max_queue_depth": 128, 00:18:36.953 "max_io_qpairs_per_ctrlr": 127, 00:18:36.953 "in_capsule_data_size": 4096, 00:18:36.953 "max_io_size": 131072, 00:18:36.953 "io_unit_size": 131072, 00:18:36.953 "max_aq_depth": 128, 00:18:36.953 "num_shared_buffers": 511, 00:18:36.953 "buf_cache_size": 4294967295, 00:18:36.953 "dif_insert_or_strip": false, 00:18:36.953 "zcopy": false, 00:18:36.953 "c2h_success": false, 00:18:36.953 "sock_priority": 0, 00:18:36.953 "abort_timeout_sec": 1, 00:18:36.953 "ack_timeout": 0, 00:18:36.953 "data_wr_pool_size": 0 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "nvmf_create_subsystem", 00:18:36.953 "params": { 00:18:36.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.953 "allow_any_host": false, 00:18:36.953 "serial_number": "SPDK00000000000001", 00:18:36.953 "model_number": "SPDK bdev Controller", 00:18:36.953 "max_namespaces": 10, 00:18:36.953 "min_cntlid": 1, 00:18:36.953 "max_cntlid": 65519, 00:18:36.953 "ana_reporting": false 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "nvmf_subsystem_add_host", 00:18:36.953 "params": { 00:18:36.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.953 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.953 "psk": "/tmp/tmp.T1x8jMk37A" 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "nvmf_subsystem_add_ns", 00:18:36.953 "params": { 00:18:36.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.953 "namespace": { 00:18:36.953 "nsid": 1, 00:18:36.953 
"bdev_name": "malloc0", 00:18:36.953 "nguid": "53EA43C645EA41A58544A181DC5944CC", 00:18:36.953 "uuid": "53ea43c6-45ea-41a5-8544-a181dc5944cc", 00:18:36.953 "no_auto_visible": false 00:18:36.953 } 00:18:36.953 } 00:18:36.953 }, 00:18:36.953 { 00:18:36.953 "method": "nvmf_subsystem_add_listener", 00:18:36.953 "params": { 00:18:36.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.953 "listen_address": { 00:18:36.953 "trtype": "TCP", 00:18:36.953 "adrfam": "IPv4", 00:18:36.953 "traddr": "10.0.0.2", 00:18:36.953 "trsvcid": "4420" 00:18:36.953 }, 00:18:36.953 "secure_channel": true 00:18:36.953 } 00:18:36.953 } 00:18:36.953 ] 00:18:36.953 } 00:18:36.953 ] 00:18:36.953 }' 00:18:36.953 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:37.211 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:37.211 "subsystems": [ 00:18:37.211 { 00:18:37.211 "subsystem": "keyring", 00:18:37.211 "config": [] 00:18:37.211 }, 00:18:37.211 { 00:18:37.211 "subsystem": "iobuf", 00:18:37.211 "config": [ 00:18:37.211 { 00:18:37.211 "method": "iobuf_set_options", 00:18:37.211 "params": { 00:18:37.211 "small_pool_count": 8192, 00:18:37.211 "large_pool_count": 1024, 00:18:37.211 "small_bufsize": 8192, 00:18:37.211 "large_bufsize": 135168 00:18:37.211 } 00:18:37.211 } 00:18:37.211 ] 00:18:37.211 }, 00:18:37.211 { 00:18:37.211 "subsystem": "sock", 00:18:37.211 "config": [ 00:18:37.211 { 00:18:37.212 "method": "sock_set_default_impl", 00:18:37.212 "params": { 00:18:37.212 "impl_name": "posix" 00:18:37.212 } 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "method": "sock_impl_set_options", 00:18:37.212 "params": { 00:18:37.212 "impl_name": "ssl", 00:18:37.212 "recv_buf_size": 4096, 00:18:37.212 "send_buf_size": 4096, 00:18:37.212 "enable_recv_pipe": true, 00:18:37.212 "enable_quickack": false, 00:18:37.212 "enable_placement_id": 0, 00:18:37.212 "enable_zerocopy_send_server": true, 00:18:37.212 "enable_zerocopy_send_client": false, 00:18:37.212 "zerocopy_threshold": 0, 00:18:37.212 "tls_version": 0, 00:18:37.212 "enable_ktls": false 00:18:37.212 } 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "method": "sock_impl_set_options", 00:18:37.212 "params": { 00:18:37.212 "impl_name": "posix", 00:18:37.212 "recv_buf_size": 2097152, 00:18:37.212 "send_buf_size": 2097152, 00:18:37.212 "enable_recv_pipe": true, 00:18:37.212 "enable_quickack": false, 00:18:37.212 "enable_placement_id": 0, 00:18:37.212 "enable_zerocopy_send_server": true, 00:18:37.212 "enable_zerocopy_send_client": false, 00:18:37.212 "zerocopy_threshold": 0, 00:18:37.212 "tls_version": 0, 00:18:37.212 "enable_ktls": false 00:18:37.212 } 00:18:37.212 } 00:18:37.212 ] 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "subsystem": "vmd", 00:18:37.212 "config": [] 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "subsystem": "accel", 00:18:37.212 "config": [ 00:18:37.212 { 00:18:37.212 "method": "accel_set_options", 00:18:37.212 "params": { 00:18:37.212 "small_cache_size": 128, 00:18:37.212 "large_cache_size": 16, 00:18:37.212 "task_count": 2048, 00:18:37.212 "sequence_count": 2048, 00:18:37.212 "buf_count": 2048 00:18:37.212 } 00:18:37.212 } 00:18:37.212 ] 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "subsystem": "bdev", 00:18:37.212 "config": [ 00:18:37.212 { 00:18:37.212 "method": "bdev_set_options", 00:18:37.212 "params": { 00:18:37.212 "bdev_io_pool_size": 65535, 00:18:37.212 "bdev_io_cache_size": 256, 00:18:37.212 
"bdev_auto_examine": true, 00:18:37.212 "iobuf_small_cache_size": 128, 00:18:37.212 "iobuf_large_cache_size": 16 00:18:37.212 } 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "method": "bdev_raid_set_options", 00:18:37.212 "params": { 00:18:37.212 "process_window_size_kb": 1024, 00:18:37.212 "process_max_bandwidth_mb_sec": 0 00:18:37.212 } 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "method": "bdev_iscsi_set_options", 00:18:37.212 "params": { 00:18:37.212 "timeout_sec": 30 00:18:37.212 } 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "method": "bdev_nvme_set_options", 00:18:37.212 "params": { 00:18:37.212 "action_on_timeout": "none", 00:18:37.212 "timeout_us": 0, 00:18:37.212 "timeout_admin_us": 0, 00:18:37.212 "keep_alive_timeout_ms": 10000, 00:18:37.212 "arbitration_burst": 0, 00:18:37.212 "low_priority_weight": 0, 00:18:37.212 "medium_priority_weight": 0, 00:18:37.212 "high_priority_weight": 0, 00:18:37.212 "nvme_adminq_poll_period_us": 10000, 00:18:37.212 "nvme_ioq_poll_period_us": 0, 00:18:37.212 "io_queue_requests": 512, 00:18:37.212 "delay_cmd_submit": true, 00:18:37.212 "transport_retry_count": 4, 00:18:37.212 "bdev_retry_count": 3, 00:18:37.212 "transport_ack_timeout": 0, 00:18:37.212 "ctrlr_loss_timeout_sec": 0, 00:18:37.212 "reconnect_delay_sec": 0, 00:18:37.212 "fast_io_fail_timeout_sec": 0, 00:18:37.212 "disable_auto_failback": false, 00:18:37.212 "generate_uuids": false, 00:18:37.212 "transport_tos": 0, 00:18:37.212 "nvme_error_stat": false, 00:18:37.212 "rdma_srq_size": 0, 00:18:37.212 "io_path_stat": false, 00:18:37.212 "allow_accel_sequence": false, 00:18:37.212 "rdma_max_cq_size": 0, 00:18:37.212 "rdma_cm_event_timeout_ms": 0, 00:18:37.212 "dhchap_digests": [ 00:18:37.212 "sha256", 00:18:37.212 "sha384", 00:18:37.212 "sha512" 00:18:37.212 ], 00:18:37.212 "dhchap_dhgroups": [ 00:18:37.212 "null", 00:18:37.212 "ffdhe2048", 00:18:37.212 "ffdhe3072", 00:18:37.212 "ffdhe4096", 00:18:37.212 "ffdhe6144", 00:18:37.212 "ffdhe8192" 00:18:37.212 ] 00:18:37.212 } 00:18:37.212 }, 00:18:37.212 { 00:18:37.212 "method": "bdev_nvme_attach_controller", 00:18:37.212 "params": { 00:18:37.212 "name": "TLSTEST", 00:18:37.212 "trtype": "TCP", 00:18:37.212 "adrfam": "IPv4", 00:18:37.213 "traddr": "10.0.0.2", 00:18:37.213 "trsvcid": "4420", 00:18:37.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.213 "prchk_reftag": false, 00:18:37.213 "prchk_guard": false, 00:18:37.213 "ctrlr_loss_timeout_sec": 0, 00:18:37.213 "reconnect_delay_sec": 0, 00:18:37.213 "fast_io_fail_timeout_sec": 0, 00:18:37.213 "psk": "/tmp/tmp.T1x8jMk37A", 00:18:37.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.213 "hdgst": false, 00:18:37.213 "ddgst": false 00:18:37.213 } 00:18:37.213 }, 00:18:37.213 { 00:18:37.213 "method": "bdev_nvme_set_hotplug", 00:18:37.213 "params": { 00:18:37.213 "period_us": 100000, 00:18:37.213 "enable": false 00:18:37.213 } 00:18:37.213 }, 00:18:37.213 { 00:18:37.213 "method": "bdev_wait_for_examine" 00:18:37.213 } 00:18:37.213 ] 00:18:37.213 }, 00:18:37.213 { 00:18:37.213 "subsystem": "nbd", 00:18:37.213 "config": [] 00:18:37.213 } 00:18:37.213 ] 00:18:37.213 }' 00:18:37.213 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2808466 00:18:37.213 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808466 ']' 00:18:37.213 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808466 00:18:37.213 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:18:37.213 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.213 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808466 00:18:37.470 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:37.470 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:37.470 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808466' 00:18:37.470 killing process with pid 2808466 00:18:37.470 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808466 00:18:37.470 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.470 00:18:37.470 Latency(us) 00:18:37.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.470 =================================================================================================================== 00:18:37.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.470 [2024-07-24 18:01:23.489277] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:37.470 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808466 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2808188 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808188 ']' 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808188 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808188 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808188' 00:18:37.727 killing process with pid 2808188 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808188 00:18:37.727 [2024-07-24 18:01:23.774349] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:37.727 18:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808188 00:18:37.984 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:37.984 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.985 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:37.985 "subsystems": [ 00:18:37.985 { 00:18:37.985 "subsystem": "keyring", 00:18:37.985 "config": [] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "iobuf", 00:18:37.985 "config": [ 00:18:37.985 { 00:18:37.985 "method": "iobuf_set_options", 
00:18:37.985 "params": { 00:18:37.985 "small_pool_count": 8192, 00:18:37.985 "large_pool_count": 1024, 00:18:37.985 "small_bufsize": 8192, 00:18:37.985 "large_bufsize": 135168 00:18:37.985 } 00:18:37.985 } 00:18:37.985 ] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "sock", 00:18:37.985 "config": [ 00:18:37.985 { 00:18:37.985 "method": "sock_set_default_impl", 00:18:37.985 "params": { 00:18:37.985 "impl_name": "posix" 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "sock_impl_set_options", 00:18:37.985 "params": { 00:18:37.985 "impl_name": "ssl", 00:18:37.985 "recv_buf_size": 4096, 00:18:37.985 "send_buf_size": 4096, 00:18:37.985 "enable_recv_pipe": true, 00:18:37.985 "enable_quickack": false, 00:18:37.985 "enable_placement_id": 0, 00:18:37.985 "enable_zerocopy_send_server": true, 00:18:37.985 "enable_zerocopy_send_client": false, 00:18:37.985 "zerocopy_threshold": 0, 00:18:37.985 "tls_version": 0, 00:18:37.985 "enable_ktls": false 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "sock_impl_set_options", 00:18:37.985 "params": { 00:18:37.985 "impl_name": "posix", 00:18:37.985 "recv_buf_size": 2097152, 00:18:37.985 "send_buf_size": 2097152, 00:18:37.985 "enable_recv_pipe": true, 00:18:37.985 "enable_quickack": false, 00:18:37.985 "enable_placement_id": 0, 00:18:37.985 "enable_zerocopy_send_server": true, 00:18:37.985 "enable_zerocopy_send_client": false, 00:18:37.985 "zerocopy_threshold": 0, 00:18:37.985 "tls_version": 0, 00:18:37.985 "enable_ktls": false 00:18:37.985 } 00:18:37.985 } 00:18:37.985 ] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "vmd", 00:18:37.985 "config": [] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "accel", 00:18:37.985 "config": [ 00:18:37.985 { 00:18:37.985 "method": "accel_set_options", 00:18:37.985 "params": { 00:18:37.985 "small_cache_size": 128, 00:18:37.985 "large_cache_size": 16, 00:18:37.985 "task_count": 2048, 00:18:37.985 "sequence_count": 2048, 00:18:37.985 "buf_count": 2048 00:18:37.985 } 00:18:37.985 } 00:18:37.985 ] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "bdev", 00:18:37.985 "config": [ 00:18:37.985 { 00:18:37.985 "method": "bdev_set_options", 00:18:37.985 "params": { 00:18:37.985 "bdev_io_pool_size": 65535, 00:18:37.985 "bdev_io_cache_size": 256, 00:18:37.985 "bdev_auto_examine": true, 00:18:37.985 "iobuf_small_cache_size": 128, 00:18:37.985 "iobuf_large_cache_size": 16 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "bdev_raid_set_options", 00:18:37.985 "params": { 00:18:37.985 "process_window_size_kb": 1024, 00:18:37.985 "process_max_bandwidth_mb_sec": 0 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "bdev_iscsi_set_options", 00:18:37.985 "params": { 00:18:37.985 "timeout_sec": 30 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "bdev_nvme_set_options", 00:18:37.985 "params": { 00:18:37.985 "action_on_timeout": "none", 00:18:37.985 "timeout_us": 0, 00:18:37.985 "timeout_admin_us": 0, 00:18:37.985 "keep_alive_timeout_ms": 10000, 00:18:37.985 "arbitration_burst": 0, 00:18:37.985 "low_priority_weight": 0, 00:18:37.985 "medium_priority_weight": 0, 00:18:37.985 "high_priority_weight": 0, 00:18:37.985 "nvme_adminq_poll_period_us": 10000, 00:18:37.985 "nvme_ioq_poll_period_us": 0, 00:18:37.985 "io_queue_requests": 0, 00:18:37.985 "delay_cmd_submit": true, 00:18:37.985 "transport_retry_count": 4, 00:18:37.985 "bdev_retry_count": 3, 00:18:37.985 "transport_ack_timeout": 0, 00:18:37.985 
"ctrlr_loss_timeout_sec": 0, 00:18:37.985 "reconnect_delay_sec": 0, 00:18:37.985 "fast_io_fail_timeout_sec": 0, 00:18:37.985 "disable_auto_failback": false, 00:18:37.985 "generate_uuids": false, 00:18:37.985 "transport_tos": 0, 00:18:37.985 "nvme_error_stat": false, 00:18:37.985 "rdma_srq_size": 0, 00:18:37.985 "io_path_stat": false, 00:18:37.985 "allow_accel_sequence": false, 00:18:37.985 "rdma_max_cq_size": 0, 00:18:37.985 "rdma_cm_event_timeout_ms": 0, 00:18:37.985 "dhchap_digests": [ 00:18:37.985 "sha256", 00:18:37.985 "sha384", 00:18:37.985 "sha512" 00:18:37.985 ], 00:18:37.985 "dhchap_dhgroups": [ 00:18:37.985 "null", 00:18:37.985 "ffdhe2048", 00:18:37.985 "ffdhe3072", 00:18:37.985 "ffdhe4096", 00:18:37.985 "ffdhe6144", 00:18:37.985 "ffdhe8192" 00:18:37.985 ] 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "bdev_nvme_set_hotplug", 00:18:37.985 "params": { 00:18:37.985 "period_us": 100000, 00:18:37.985 "enable": false 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "bdev_malloc_create", 00:18:37.985 "params": { 00:18:37.985 "name": "malloc0", 00:18:37.985 "num_blocks": 8192, 00:18:37.985 "block_size": 4096, 00:18:37.985 "physical_block_size": 4096, 00:18:37.985 "uuid": "53ea43c6-45ea-41a5-8544-a181dc5944cc", 00:18:37.985 "optimal_io_boundary": 0, 00:18:37.985 "md_size": 0, 00:18:37.985 "dif_type": 0, 00:18:37.985 "dif_is_head_of_md": false, 00:18:37.985 "dif_pi_format": 0 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "bdev_wait_for_examine" 00:18:37.985 } 00:18:37.985 ] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "nbd", 00:18:37.985 "config": [] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "scheduler", 00:18:37.985 "config": [ 00:18:37.985 { 00:18:37.985 "method": "framework_set_scheduler", 00:18:37.985 "params": { 00:18:37.985 "name": "static" 00:18:37.985 } 00:18:37.985 } 00:18:37.985 ] 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "subsystem": "nvmf", 00:18:37.985 "config": [ 00:18:37.985 { 00:18:37.985 "method": "nvmf_set_config", 00:18:37.985 "params": { 00:18:37.985 "discovery_filter": "match_any", 00:18:37.985 "admin_cmd_passthru": { 00:18:37.985 "identify_ctrlr": false 00:18:37.985 } 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "nvmf_set_max_subsystems", 00:18:37.985 "params": { 00:18:37.985 "max_subsystems": 1024 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "nvmf_set_crdt", 00:18:37.985 "params": { 00:18:37.985 "crdt1": 0, 00:18:37.985 "crdt2": 0, 00:18:37.985 "crdt3": 0 00:18:37.985 } 00:18:37.985 }, 00:18:37.985 { 00:18:37.985 "method": "nvmf_create_transport", 00:18:37.985 "params": { 00:18:37.985 "trtype": "TCP", 00:18:37.985 "max_queue_depth": 128, 00:18:37.985 "max_io_qpairs_per_ctrlr": 127, 00:18:37.985 "in_capsule_data_size": 4096, 00:18:37.985 "max_io_size": 131072, 00:18:37.985 "io_unit_size": 131072, 00:18:37.985 "max_aq_depth": 128, 00:18:37.985 "num_shared_buffers": 511, 00:18:37.985 "buf_cache_size": 4294967295, 00:18:37.985 "dif_insert_or_strip": false, 00:18:37.985 "zcopy": false, 00:18:37.985 "c2h_success": false, 00:18:37.985 "sock_priority": 0, 00:18:37.985 "abort_timeout_sec": 1, 00:18:37.985 "ack_timeout": 0, 00:18:37.985 "data_wr_pool_size": 0 00:18:37.985 } 00:18:37.985 }, 00:18:37.986 { 00:18:37.986 "method": "nvmf_create_subsystem", 00:18:37.986 "params": { 00:18:37.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.986 "allow_any_host": false, 00:18:37.986 "serial_number": "SPDK00000000000001", 00:18:37.986 
"model_number": "SPDK bdev Controller", 00:18:37.986 "max_namespaces": 10, 00:18:37.986 "min_cntlid": 1, 00:18:37.986 "max_cntlid": 65519, 00:18:37.986 "ana_reporting": false 00:18:37.986 } 00:18:37.986 }, 00:18:37.986 { 00:18:37.986 "method": "nvmf_subsystem_add_host", 00:18:37.986 "params": { 00:18:37.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.986 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.986 "psk": "/tmp/tmp.T1x8jMk37A" 00:18:37.986 } 00:18:37.986 }, 00:18:37.986 { 00:18:37.986 "method": "nvmf_subsystem_add_ns", 00:18:37.986 "params": { 00:18:37.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.986 "namespace": { 00:18:37.986 "nsid": 1, 00:18:37.986 "bdev_name": "malloc0", 00:18:37.986 "nguid": "53EA43C645EA41A58544A181DC5944CC", 00:18:37.986 "uuid": "53ea43c6-45ea-41a5-8544-a181dc5944cc", 00:18:37.986 "no_auto_visible": false 00:18:37.986 } 00:18:37.986 } 00:18:37.986 }, 00:18:37.986 { 00:18:37.986 "method": "nvmf_subsystem_add_listener", 00:18:37.986 "params": { 00:18:37.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.986 "listen_address": { 00:18:37.986 "trtype": "TCP", 00:18:37.986 "adrfam": "IPv4", 00:18:37.986 "traddr": "10.0.0.2", 00:18:37.986 "trsvcid": "4420" 00:18:37.986 }, 00:18:37.986 "secure_channel": true 00:18:37.986 } 00:18:37.986 } 00:18:37.986 ] 00:18:37.986 } 00:18:37.986 ] 00:18:37.986 }' 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2808746 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2808746 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2808746 ']' 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.986 18:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.986 [2024-07-24 18:01:24.126980] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:37.986 [2024-07-24 18:01:24.127073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.986 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.986 [2024-07-24 18:01:24.191701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.243 [2024-07-24 18:01:24.305034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:38.243 [2024-07-24 18:01:24.305087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.243 [2024-07-24 18:01:24.305108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.243 [2024-07-24 18:01:24.305137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.243 [2024-07-24 18:01:24.305148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.243 [2024-07-24 18:01:24.305236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.501 [2024-07-24 18:01:24.541820] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.501 [2024-07-24 18:01:24.571474] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:38.501 [2024-07-24 18:01:24.587524] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.501 [2024-07-24 18:01:24.587739] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2808894 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2808894 /var/tmp/bdevperf.sock 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2808894 ']' 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
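The replayed config carries "secure_channel": true on the cnode1 listener, which is why the freshly started target immediately logs the experimental TLS listener above. Expressed as a discrete RPC instead of saved JSON, this appears to correspond to the -k flag used elsewhere in this run:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k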
00:18:39.068 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:39.068 "subsystems": [ 00:18:39.068 { 00:18:39.068 "subsystem": "keyring", 00:18:39.068 "config": [] 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "subsystem": "iobuf", 00:18:39.068 "config": [ 00:18:39.068 { 00:18:39.068 "method": "iobuf_set_options", 00:18:39.068 "params": { 00:18:39.068 "small_pool_count": 8192, 00:18:39.068 "large_pool_count": 1024, 00:18:39.068 "small_bufsize": 8192, 00:18:39.068 "large_bufsize": 135168 00:18:39.068 } 00:18:39.068 } 00:18:39.068 ] 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "subsystem": "sock", 00:18:39.068 "config": [ 00:18:39.068 { 00:18:39.068 "method": "sock_set_default_impl", 00:18:39.068 "params": { 00:18:39.068 "impl_name": "posix" 00:18:39.068 } 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "method": "sock_impl_set_options", 00:18:39.068 "params": { 00:18:39.068 "impl_name": "ssl", 00:18:39.068 "recv_buf_size": 4096, 00:18:39.068 "send_buf_size": 4096, 00:18:39.068 "enable_recv_pipe": true, 00:18:39.068 "enable_quickack": false, 00:18:39.068 "enable_placement_id": 0, 00:18:39.068 "enable_zerocopy_send_server": true, 00:18:39.068 "enable_zerocopy_send_client": false, 00:18:39.068 "zerocopy_threshold": 0, 00:18:39.068 "tls_version": 0, 00:18:39.068 "enable_ktls": false 00:18:39.068 } 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "method": "sock_impl_set_options", 00:18:39.068 "params": { 00:18:39.068 "impl_name": "posix", 00:18:39.068 "recv_buf_size": 2097152, 00:18:39.068 "send_buf_size": 2097152, 00:18:39.068 "enable_recv_pipe": true, 00:18:39.068 "enable_quickack": false, 00:18:39.068 "enable_placement_id": 0, 00:18:39.068 "enable_zerocopy_send_server": true, 00:18:39.068 "enable_zerocopy_send_client": false, 00:18:39.068 "zerocopy_threshold": 0, 00:18:39.068 "tls_version": 0, 00:18:39.068 "enable_ktls": false 00:18:39.068 } 00:18:39.068 } 00:18:39.068 ] 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "subsystem": "vmd", 00:18:39.068 "config": [] 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "subsystem": "accel", 00:18:39.068 "config": [ 00:18:39.068 { 00:18:39.068 "method": "accel_set_options", 00:18:39.068 "params": { 00:18:39.068 "small_cache_size": 128, 00:18:39.068 "large_cache_size": 16, 00:18:39.068 "task_count": 2048, 00:18:39.068 "sequence_count": 2048, 00:18:39.068 "buf_count": 2048 00:18:39.068 } 00:18:39.068 } 00:18:39.068 ] 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "subsystem": "bdev", 00:18:39.068 "config": [ 00:18:39.068 { 00:18:39.068 "method": "bdev_set_options", 00:18:39.068 "params": { 00:18:39.068 "bdev_io_pool_size": 65535, 00:18:39.068 "bdev_io_cache_size": 256, 00:18:39.068 "bdev_auto_examine": true, 00:18:39.068 "iobuf_small_cache_size": 128, 00:18:39.068 "iobuf_large_cache_size": 16 00:18:39.068 } 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "method": "bdev_raid_set_options", 00:18:39.068 "params": { 00:18:39.068 "process_window_size_kb": 1024, 00:18:39.068 "process_max_bandwidth_mb_sec": 0 00:18:39.068 } 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "method": "bdev_iscsi_set_options", 00:18:39.068 "params": { 00:18:39.068 "timeout_sec": 30 00:18:39.068 } 00:18:39.068 }, 00:18:39.068 { 00:18:39.068 "method": "bdev_nvme_set_options", 00:18:39.068 "params": { 00:18:39.068 "action_on_timeout": "none", 00:18:39.068 "timeout_us": 0, 00:18:39.068 "timeout_admin_us": 0, 00:18:39.068 "keep_alive_timeout_ms": 10000, 00:18:39.068 "arbitration_burst": 0, 00:18:39.068 "low_priority_weight": 0, 00:18:39.068 "medium_priority_weight": 0, 
00:18:39.068 "high_priority_weight": 0, 00:18:39.068 "nvme_adminq_poll_period_us": 10000, 00:18:39.068 "nvme_ioq_poll_period_us": 0, 00:18:39.068 "io_queue_requests": 512, 00:18:39.068 "delay_cmd_submit": true, 00:18:39.068 "transport_retry_count": 4, 00:18:39.068 "bdev_retry_count": 3, 00:18:39.068 "transport_ack_timeout": 0, 00:18:39.068 "ctrlr_loss_timeout_sec": 0, 00:18:39.068 "reconnect_delay_sec": 0, 00:18:39.068 "fast_io_fail_timeout_sec": 0, 00:18:39.068 "disable_auto_failback": false, 00:18:39.068 "generate_uuids": false, 00:18:39.068 "transport_tos": 0, 00:18:39.068 "nvme_error_stat": false, 00:18:39.068 "rdma_srq_size": 0, 00:18:39.068 "io_path_stat": false, 00:18:39.068 "allow_accel_sequence": false, 00:18:39.068 "rdma_max_cq_size": 0, 00:18:39.068 "rdma_cm_event_timeout_ms": 0, 00:18:39.068 "dhchap_digests": [ 00:18:39.068 "sha256", 00:18:39.069 "sha384", 00:18:39.069 "sha512" 00:18:39.069 ], 00:18:39.069 "dhchap_dhgroups": [ 00:18:39.069 "null", 00:18:39.069 "ffdhe2048", 00:18:39.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.069 "ffdhe3072", 00:18:39.069 "ffdhe4096", 00:18:39.069 "ffdhe6144", 00:18:39.069 "ffdhe8192" 00:18:39.069 ] 00:18:39.069 } 00:18:39.069 }, 00:18:39.069 { 00:18:39.069 "method": "bdev_nvme_attach_controller", 00:18:39.069 "params": { 00:18:39.069 "name": "TLSTEST", 00:18:39.069 "trtype": "TCP", 00:18:39.069 "adrfam": "IPv4", 00:18:39.069 "traddr": "10.0.0.2", 00:18:39.069 "trsvcid": "4420", 00:18:39.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.069 "prchk_reftag": false, 00:18:39.069 "prchk_guard": false, 00:18:39.069 "ctrlr_loss_timeout_sec": 0, 00:18:39.069 "reconnect_delay_sec": 0, 00:18:39.069 "fast_io_fail_timeout_sec": 0, 00:18:39.069 "psk": "/tmp/tmp.T1x8jMk37A", 00:18:39.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.069 "hdgst": false, 00:18:39.069 "ddgst": false 00:18:39.069 } 00:18:39.069 }, 00:18:39.069 { 00:18:39.069 "method": "bdev_nvme_set_hotplug", 00:18:39.069 "params": { 00:18:39.069 "period_us": 100000, 00:18:39.069 "enable": false 00:18:39.069 } 00:18:39.069 }, 00:18:39.069 { 00:18:39.069 "method": "bdev_wait_for_examine" 00:18:39.069 } 00:18:39.069 ] 00:18:39.069 }, 00:18:39.069 { 00:18:39.069 "subsystem": "nbd", 00:18:39.069 "config": [] 00:18:39.069 } 00:18:39.069 ] 00:18:39.069 }' 00:18:39.069 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.069 18:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.069 [2024-07-24 18:01:25.200092] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:18:39.069 [2024-07-24 18:01:25.200203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808894 ] 00:18:39.069 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.069 [2024-07-24 18:01:25.258565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.327 [2024-07-24 18:01:25.364942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.327 [2024-07-24 18:01:25.536064] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.327 [2024-07-24 18:01:25.536217] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:40.262 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.262 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:40.262 18:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:40.262 Running I/O for 10 seconds... 00:18:50.225 00:18:50.225 Latency(us) 00:18:50.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.225 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:50.225 Verification LBA range: start 0x0 length 0x2000 00:18:50.225 TLSTESTn1 : 10.04 3004.04 11.73 0.00 0.00 42501.57 6602.15 62137.84 00:18:50.225 =================================================================================================================== 00:18:50.225 Total : 3004.04 11.73 0.00 0.00 42501.57 6602.15 62137.84 00:18:50.225 0 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2808894 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808894 ']' 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808894 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808894 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808894' 00:18:50.225 killing process with pid 2808894 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808894 00:18:50.225 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.225 00:18:50.225 Latency(us) 00:18:50.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.225 
=================================================================================================================== 00:18:50.225 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.225 [2024-07-24 18:01:36.413667] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:50.225 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808894 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2808746 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2808746 ']' 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2808746 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2808746 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2808746' 00:18:50.483 killing process with pid 2808746 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2808746 00:18:50.483 [2024-07-24 18:01:36.682720] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:50.483 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2808746 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2810231 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2810231 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2810231 ']' 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
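From this point the test stops replaying saved JSON and rebuilds the target with discrete RPCs via setup_nvmf_tgt (target/tls.sh@49-58, traced below). A condensed sketch of that sequence under the same NQNs, address, and PSK path, again assuming a relative rpc.py path:

  key=/tmp/tmp.T1x8jMk37A
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k makes the listener TLS-capable (secure channel)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Raw PSK path on the target side; this is what triggers the
  # nvmf_tcp_psk_path deprecation warnings seen in this run
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"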
00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.740 18:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.998 [2024-07-24 18:01:37.027173] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:50.998 [2024-07-24 18:01:37.027258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.998 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.999 [2024-07-24 18:01:37.098462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.999 [2024-07-24 18:01:37.220055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.999 [2024-07-24 18:01:37.220123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.999 [2024-07-24 18:01:37.220141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.999 [2024-07-24 18:01:37.220155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.999 [2024-07-24 18:01:37.220181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.999 [2024-07-24 18:01:37.220217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.T1x8jMk37A 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T1x8jMk37A 00:18:51.256 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:51.513 [2024-07-24 18:01:37.637842] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.513 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:51.771 18:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:52.029 [2024-07-24 18:01:38.123137] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.029 [2024-07-24 18:01:38.123360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.029 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:52.287 malloc0 00:18:52.287 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.545 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T1x8jMk37A 00:18:52.802 [2024-07-24 18:01:38.946039] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:52.802 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2810510 00:18:52.802 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2810510 /var/tmp/bdevperf.sock 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2810510 ']' 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.803 18:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.803 [2024-07-24 18:01:39.011148] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
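The attach that follows (target/tls.sh@227-228) differs from the earlier rounds: the PSK file is first registered with the bdevperf app's keyring and then referenced by name, rather than passed to the controller as a raw path. A sketch of that pair of calls with shortened paths:

  # Register the PSK file under the name key0 inside the bdevperf process
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.T1x8jMk37A
  # Attach, referring to the key by name instead of by path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1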
00:18:52.803 [2024-07-24 18:01:39.011244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810510 ] 00:18:52.803 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.061 [2024-07-24 18:01:39.073290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.061 [2024-07-24 18:01:39.190267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.061 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.061 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.061 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.T1x8jMk37A 00:18:53.318 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.576 [2024-07-24 18:01:39.791330] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.833 nvme0n1 00:18:53.834 18:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.834 Running I/O for 1 seconds... 00:18:55.205 00:18:55.205 Latency(us) 00:18:55.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.205 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.205 Verification LBA range: start 0x0 length 0x2000 00:18:55.205 nvme0n1 : 1.04 2047.45 8.00 0.00 0.00 61560.95 8641.04 86216.25 00:18:55.205 =================================================================================================================== 00:18:55.205 Total : 2047.45 8.00 0.00 0.00 61560.95 8641.04 86216.25 00:18:55.205 0 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2810510 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2810510 ']' 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2810510 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2810510 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2810510' 00:18:55.205 killing process with pid 2810510 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2810510 00:18:55.205 Received shutdown signal, test 
time was about 1.000000 seconds 00:18:55.205 00:18:55.205 Latency(us) 00:18:55.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.205 =================================================================================================================== 00:18:55.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2810510 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2810231 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2810231 ']' 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2810231 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2810231 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2810231' 00:18:55.205 killing process with pid 2810231 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2810231 00:18:55.205 [2024-07-24 18:01:41.368309] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:55.205 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2810231 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2810795 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2810795 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2810795 ']' 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
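As in the previous rounds, bdevperf is launched with -z so it idles after initialization; the verify workload defined at launch (-q 128 -o 4k -w verify -t 1) only starts once perform_tests is sent over its RPC socket, which is what target/tls.sh@262 does in the round below. Sketch with the script path shortened:

  # Kick off the pre-configured 1-second, 128-deep 4k verify pass on demand
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests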
00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.463 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.463 [2024-07-24 18:01:41.686674] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:55.463 [2024-07-24 18:01:41.686771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.463 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.720 [2024-07-24 18:01:41.750072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.720 [2024-07-24 18:01:41.860282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.720 [2024-07-24 18:01:41.860337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.720 [2024-07-24 18:01:41.860351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.720 [2024-07-24 18:01:41.860364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.721 [2024-07-24 18:01:41.860390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.721 [2024-07-24 18:01:41.860418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.721 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.721 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:55.721 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.721 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:55.721 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.978 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.978 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:55.978 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.978 18:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.978 [2024-07-24 18:01:42.006119] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.978 malloc0 00:18:55.978 [2024-07-24 18:01:42.039312] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.978 [2024-07-24 18:01:42.053300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2810936 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2810936 /var/tmp/bdevperf.sock 00:18:55.978 18:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2810936 ']' 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.978 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.978 [2024-07-24 18:01:42.119529] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:55.978 [2024-07-24 18:01:42.119586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810936 ] 00:18:55.978 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.978 [2024-07-24 18:01:42.179740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.236 [2024-07-24 18:01:42.297623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.236 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.236 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:56.236 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.T1x8jMk37A 00:18:56.494 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:56.752 [2024-07-24 18:01:42.897136] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.752 nvme0n1 00:18:56.752 18:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.010 Running I/O for 1 seconds... 
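Two RPCs against the bdevperf socket set up the TLS-PSK session traced above: keyring_file_add_key registers the PSK file under a name, and bdev_nvme_attach_controller then references the key by that name rather than by raw path (the deprecated 'PSK path' form noted earlier). Condensed from the commands in this run, with rpc.py standing for the full scripts/rpc.py path:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.T1x8jMk37A
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1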
00:18:58.003 00:18:58.003 Latency(us) 00:18:58.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.003 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:58.003 Verification LBA range: start 0x0 length 0x2000 00:18:58.003 nvme0n1 : 1.04 2889.02 11.29 0.00 0.00 43499.33 5971.06 78449.02 00:18:58.003 =================================================================================================================== 00:18:58.003 Total : 2889.02 11.29 0.00 0.00 43499.33 5971.06 78449.02 00:18:58.003 0 00:18:58.003 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:58.003 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.003 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.261 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.261 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:58.261 "subsystems": [ 00:18:58.261 { 00:18:58.261 "subsystem": "keyring", 00:18:58.261 "config": [ 00:18:58.261 { 00:18:58.261 "method": "keyring_file_add_key", 00:18:58.261 "params": { 00:18:58.261 "name": "key0", 00:18:58.261 "path": "/tmp/tmp.T1x8jMk37A" 00:18:58.261 } 00:18:58.261 } 00:18:58.261 ] 00:18:58.261 }, 00:18:58.261 { 00:18:58.261 "subsystem": "iobuf", 00:18:58.261 "config": [ 00:18:58.261 { 00:18:58.261 "method": "iobuf_set_options", 00:18:58.261 "params": { 00:18:58.261 "small_pool_count": 8192, 00:18:58.261 "large_pool_count": 1024, 00:18:58.261 "small_bufsize": 8192, 00:18:58.261 "large_bufsize": 135168 00:18:58.261 } 00:18:58.261 } 00:18:58.261 ] 00:18:58.261 }, 00:18:58.261 { 00:18:58.262 "subsystem": "sock", 00:18:58.262 "config": [ 00:18:58.262 { 00:18:58.262 "method": "sock_set_default_impl", 00:18:58.262 "params": { 00:18:58.262 "impl_name": "posix" 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "sock_impl_set_options", 00:18:58.262 "params": { 00:18:58.262 "impl_name": "ssl", 00:18:58.262 "recv_buf_size": 4096, 00:18:58.262 "send_buf_size": 4096, 00:18:58.262 "enable_recv_pipe": true, 00:18:58.262 "enable_quickack": false, 00:18:58.262 "enable_placement_id": 0, 00:18:58.262 "enable_zerocopy_send_server": true, 00:18:58.262 "enable_zerocopy_send_client": false, 00:18:58.262 "zerocopy_threshold": 0, 00:18:58.262 "tls_version": 0, 00:18:58.262 "enable_ktls": false 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "sock_impl_set_options", 00:18:58.262 "params": { 00:18:58.262 "impl_name": "posix", 00:18:58.262 "recv_buf_size": 2097152, 00:18:58.262 "send_buf_size": 2097152, 00:18:58.262 "enable_recv_pipe": true, 00:18:58.262 "enable_quickack": false, 00:18:58.262 "enable_placement_id": 0, 00:18:58.262 "enable_zerocopy_send_server": true, 00:18:58.262 "enable_zerocopy_send_client": false, 00:18:58.262 "zerocopy_threshold": 0, 00:18:58.262 "tls_version": 0, 00:18:58.262 "enable_ktls": false 00:18:58.262 } 00:18:58.262 } 00:18:58.262 ] 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "subsystem": "vmd", 00:18:58.262 "config": [] 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "subsystem": "accel", 00:18:58.262 "config": [ 00:18:58.262 { 00:18:58.262 "method": "accel_set_options", 00:18:58.262 "params": { 00:18:58.262 "small_cache_size": 128, 00:18:58.262 "large_cache_size": 16, 00:18:58.262 "task_count": 2048, 00:18:58.262 "sequence_count": 2048, 00:18:58.262 "buf_count": 
2048 00:18:58.262 } 00:18:58.262 } 00:18:58.262 ] 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "subsystem": "bdev", 00:18:58.262 "config": [ 00:18:58.262 { 00:18:58.262 "method": "bdev_set_options", 00:18:58.262 "params": { 00:18:58.262 "bdev_io_pool_size": 65535, 00:18:58.262 "bdev_io_cache_size": 256, 00:18:58.262 "bdev_auto_examine": true, 00:18:58.262 "iobuf_small_cache_size": 128, 00:18:58.262 "iobuf_large_cache_size": 16 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "bdev_raid_set_options", 00:18:58.262 "params": { 00:18:58.262 "process_window_size_kb": 1024, 00:18:58.262 "process_max_bandwidth_mb_sec": 0 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "bdev_iscsi_set_options", 00:18:58.262 "params": { 00:18:58.262 "timeout_sec": 30 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "bdev_nvme_set_options", 00:18:58.262 "params": { 00:18:58.262 "action_on_timeout": "none", 00:18:58.262 "timeout_us": 0, 00:18:58.262 "timeout_admin_us": 0, 00:18:58.262 "keep_alive_timeout_ms": 10000, 00:18:58.262 "arbitration_burst": 0, 00:18:58.262 "low_priority_weight": 0, 00:18:58.262 "medium_priority_weight": 0, 00:18:58.262 "high_priority_weight": 0, 00:18:58.262 "nvme_adminq_poll_period_us": 10000, 00:18:58.262 "nvme_ioq_poll_period_us": 0, 00:18:58.262 "io_queue_requests": 0, 00:18:58.262 "delay_cmd_submit": true, 00:18:58.262 "transport_retry_count": 4, 00:18:58.262 "bdev_retry_count": 3, 00:18:58.262 "transport_ack_timeout": 0, 00:18:58.262 "ctrlr_loss_timeout_sec": 0, 00:18:58.262 "reconnect_delay_sec": 0, 00:18:58.262 "fast_io_fail_timeout_sec": 0, 00:18:58.262 "disable_auto_failback": false, 00:18:58.262 "generate_uuids": false, 00:18:58.262 "transport_tos": 0, 00:18:58.262 "nvme_error_stat": false, 00:18:58.262 "rdma_srq_size": 0, 00:18:58.262 "io_path_stat": false, 00:18:58.262 "allow_accel_sequence": false, 00:18:58.262 "rdma_max_cq_size": 0, 00:18:58.262 "rdma_cm_event_timeout_ms": 0, 00:18:58.262 "dhchap_digests": [ 00:18:58.262 "sha256", 00:18:58.262 "sha384", 00:18:58.262 "sha512" 00:18:58.262 ], 00:18:58.262 "dhchap_dhgroups": [ 00:18:58.262 "null", 00:18:58.262 "ffdhe2048", 00:18:58.262 "ffdhe3072", 00:18:58.262 "ffdhe4096", 00:18:58.262 "ffdhe6144", 00:18:58.262 "ffdhe8192" 00:18:58.262 ] 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "bdev_nvme_set_hotplug", 00:18:58.262 "params": { 00:18:58.262 "period_us": 100000, 00:18:58.262 "enable": false 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "bdev_malloc_create", 00:18:58.262 "params": { 00:18:58.262 "name": "malloc0", 00:18:58.262 "num_blocks": 8192, 00:18:58.262 "block_size": 4096, 00:18:58.262 "physical_block_size": 4096, 00:18:58.262 "uuid": "3a0f28e7-09ae-4a7f-ac9d-cf78113af3ec", 00:18:58.262 "optimal_io_boundary": 0, 00:18:58.262 "md_size": 0, 00:18:58.262 "dif_type": 0, 00:18:58.262 "dif_is_head_of_md": false, 00:18:58.262 "dif_pi_format": 0 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "bdev_wait_for_examine" 00:18:58.262 } 00:18:58.262 ] 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "subsystem": "nbd", 00:18:58.262 "config": [] 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "subsystem": "scheduler", 00:18:58.262 "config": [ 00:18:58.262 { 00:18:58.262 "method": "framework_set_scheduler", 00:18:58.262 "params": { 00:18:58.262 "name": "static" 00:18:58.262 } 00:18:58.262 } 00:18:58.262 ] 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "subsystem": "nvmf", 00:18:58.262 "config": [ 00:18:58.262 { 00:18:58.262 
"method": "nvmf_set_config", 00:18:58.262 "params": { 00:18:58.262 "discovery_filter": "match_any", 00:18:58.262 "admin_cmd_passthru": { 00:18:58.262 "identify_ctrlr": false 00:18:58.262 } 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "nvmf_set_max_subsystems", 00:18:58.262 "params": { 00:18:58.262 "max_subsystems": 1024 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "nvmf_set_crdt", 00:18:58.262 "params": { 00:18:58.262 "crdt1": 0, 00:18:58.262 "crdt2": 0, 00:18:58.262 "crdt3": 0 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "nvmf_create_transport", 00:18:58.262 "params": { 00:18:58.262 "trtype": "TCP", 00:18:58.262 "max_queue_depth": 128, 00:18:58.262 "max_io_qpairs_per_ctrlr": 127, 00:18:58.262 "in_capsule_data_size": 4096, 00:18:58.262 "max_io_size": 131072, 00:18:58.262 "io_unit_size": 131072, 00:18:58.262 "max_aq_depth": 128, 00:18:58.262 "num_shared_buffers": 511, 00:18:58.262 "buf_cache_size": 4294967295, 00:18:58.262 "dif_insert_or_strip": false, 00:18:58.262 "zcopy": false, 00:18:58.262 "c2h_success": false, 00:18:58.262 "sock_priority": 0, 00:18:58.262 "abort_timeout_sec": 1, 00:18:58.262 "ack_timeout": 0, 00:18:58.262 "data_wr_pool_size": 0 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "nvmf_create_subsystem", 00:18:58.262 "params": { 00:18:58.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.262 "allow_any_host": false, 00:18:58.262 "serial_number": "00000000000000000000", 00:18:58.262 "model_number": "SPDK bdev Controller", 00:18:58.262 "max_namespaces": 32, 00:18:58.262 "min_cntlid": 1, 00:18:58.262 "max_cntlid": 65519, 00:18:58.262 "ana_reporting": false 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "nvmf_subsystem_add_host", 00:18:58.262 "params": { 00:18:58.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.262 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.262 "psk": "key0" 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "nvmf_subsystem_add_ns", 00:18:58.262 "params": { 00:18:58.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.262 "namespace": { 00:18:58.262 "nsid": 1, 00:18:58.262 "bdev_name": "malloc0", 00:18:58.262 "nguid": "3A0F28E709AE4A7FAC9DCF78113AF3EC", 00:18:58.262 "uuid": "3a0f28e7-09ae-4a7f-ac9d-cf78113af3ec", 00:18:58.262 "no_auto_visible": false 00:18:58.262 } 00:18:58.262 } 00:18:58.262 }, 00:18:58.262 { 00:18:58.262 "method": "nvmf_subsystem_add_listener", 00:18:58.262 "params": { 00:18:58.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.262 "listen_address": { 00:18:58.262 "trtype": "TCP", 00:18:58.262 "adrfam": "IPv4", 00:18:58.262 "traddr": "10.0.0.2", 00:18:58.262 "trsvcid": "4420" 00:18:58.262 }, 00:18:58.262 "secure_channel": false, 00:18:58.262 "sock_impl": "ssl" 00:18:58.262 } 00:18:58.262 } 00:18:58.262 ] 00:18:58.262 } 00:18:58.262 ] 00:18:58.262 }' 00:18:58.262 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:58.521 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:58.521 "subsystems": [ 00:18:58.521 { 00:18:58.521 "subsystem": "keyring", 00:18:58.521 "config": [ 00:18:58.521 { 00:18:58.521 "method": "keyring_file_add_key", 00:18:58.521 "params": { 00:18:58.521 "name": "key0", 00:18:58.521 "path": "/tmp/tmp.T1x8jMk37A" 00:18:58.521 } 00:18:58.521 } 00:18:58.521 ] 00:18:58.521 }, 00:18:58.521 { 00:18:58.521 "subsystem": "iobuf", 00:18:58.521 
"config": [ 00:18:58.521 { 00:18:58.521 "method": "iobuf_set_options", 00:18:58.521 "params": { 00:18:58.521 "small_pool_count": 8192, 00:18:58.521 "large_pool_count": 1024, 00:18:58.521 "small_bufsize": 8192, 00:18:58.521 "large_bufsize": 135168 00:18:58.521 } 00:18:58.521 } 00:18:58.521 ] 00:18:58.521 }, 00:18:58.521 { 00:18:58.521 "subsystem": "sock", 00:18:58.521 "config": [ 00:18:58.521 { 00:18:58.521 "method": "sock_set_default_impl", 00:18:58.521 "params": { 00:18:58.521 "impl_name": "posix" 00:18:58.521 } 00:18:58.521 }, 00:18:58.521 { 00:18:58.521 "method": "sock_impl_set_options", 00:18:58.521 "params": { 00:18:58.521 "impl_name": "ssl", 00:18:58.521 "recv_buf_size": 4096, 00:18:58.521 "send_buf_size": 4096, 00:18:58.521 "enable_recv_pipe": true, 00:18:58.521 "enable_quickack": false, 00:18:58.521 "enable_placement_id": 0, 00:18:58.521 "enable_zerocopy_send_server": true, 00:18:58.521 "enable_zerocopy_send_client": false, 00:18:58.522 "zerocopy_threshold": 0, 00:18:58.522 "tls_version": 0, 00:18:58.522 "enable_ktls": false 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "sock_impl_set_options", 00:18:58.522 "params": { 00:18:58.522 "impl_name": "posix", 00:18:58.522 "recv_buf_size": 2097152, 00:18:58.522 "send_buf_size": 2097152, 00:18:58.522 "enable_recv_pipe": true, 00:18:58.522 "enable_quickack": false, 00:18:58.522 "enable_placement_id": 0, 00:18:58.522 "enable_zerocopy_send_server": true, 00:18:58.522 "enable_zerocopy_send_client": false, 00:18:58.522 "zerocopy_threshold": 0, 00:18:58.522 "tls_version": 0, 00:18:58.522 "enable_ktls": false 00:18:58.522 } 00:18:58.522 } 00:18:58.522 ] 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "subsystem": "vmd", 00:18:58.522 "config": [] 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "subsystem": "accel", 00:18:58.522 "config": [ 00:18:58.522 { 00:18:58.522 "method": "accel_set_options", 00:18:58.522 "params": { 00:18:58.522 "small_cache_size": 128, 00:18:58.522 "large_cache_size": 16, 00:18:58.522 "task_count": 2048, 00:18:58.522 "sequence_count": 2048, 00:18:58.522 "buf_count": 2048 00:18:58.522 } 00:18:58.522 } 00:18:58.522 ] 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "subsystem": "bdev", 00:18:58.522 "config": [ 00:18:58.522 { 00:18:58.522 "method": "bdev_set_options", 00:18:58.522 "params": { 00:18:58.522 "bdev_io_pool_size": 65535, 00:18:58.522 "bdev_io_cache_size": 256, 00:18:58.522 "bdev_auto_examine": true, 00:18:58.522 "iobuf_small_cache_size": 128, 00:18:58.522 "iobuf_large_cache_size": 16 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "bdev_raid_set_options", 00:18:58.522 "params": { 00:18:58.522 "process_window_size_kb": 1024, 00:18:58.522 "process_max_bandwidth_mb_sec": 0 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "bdev_iscsi_set_options", 00:18:58.522 "params": { 00:18:58.522 "timeout_sec": 30 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "bdev_nvme_set_options", 00:18:58.522 "params": { 00:18:58.522 "action_on_timeout": "none", 00:18:58.522 "timeout_us": 0, 00:18:58.522 "timeout_admin_us": 0, 00:18:58.522 "keep_alive_timeout_ms": 10000, 00:18:58.522 "arbitration_burst": 0, 00:18:58.522 "low_priority_weight": 0, 00:18:58.522 "medium_priority_weight": 0, 00:18:58.522 "high_priority_weight": 0, 00:18:58.522 "nvme_adminq_poll_period_us": 10000, 00:18:58.522 "nvme_ioq_poll_period_us": 0, 00:18:58.522 "io_queue_requests": 512, 00:18:58.522 "delay_cmd_submit": true, 00:18:58.522 "transport_retry_count": 4, 00:18:58.522 "bdev_retry_count": 3, 
00:18:58.522 "transport_ack_timeout": 0, 00:18:58.522 "ctrlr_loss_timeout_sec": 0, 00:18:58.522 "reconnect_delay_sec": 0, 00:18:58.522 "fast_io_fail_timeout_sec": 0, 00:18:58.522 "disable_auto_failback": false, 00:18:58.522 "generate_uuids": false, 00:18:58.522 "transport_tos": 0, 00:18:58.522 "nvme_error_stat": false, 00:18:58.522 "rdma_srq_size": 0, 00:18:58.522 "io_path_stat": false, 00:18:58.522 "allow_accel_sequence": false, 00:18:58.522 "rdma_max_cq_size": 0, 00:18:58.522 "rdma_cm_event_timeout_ms": 0, 00:18:58.522 "dhchap_digests": [ 00:18:58.522 "sha256", 00:18:58.522 "sha384", 00:18:58.522 "sha512" 00:18:58.522 ], 00:18:58.522 "dhchap_dhgroups": [ 00:18:58.522 "null", 00:18:58.522 "ffdhe2048", 00:18:58.522 "ffdhe3072", 00:18:58.522 "ffdhe4096", 00:18:58.522 "ffdhe6144", 00:18:58.522 "ffdhe8192" 00:18:58.522 ] 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "bdev_nvme_attach_controller", 00:18:58.522 "params": { 00:18:58.522 "name": "nvme0", 00:18:58.522 "trtype": "TCP", 00:18:58.522 "adrfam": "IPv4", 00:18:58.522 "traddr": "10.0.0.2", 00:18:58.522 "trsvcid": "4420", 00:18:58.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.522 "prchk_reftag": false, 00:18:58.522 "prchk_guard": false, 00:18:58.522 "ctrlr_loss_timeout_sec": 0, 00:18:58.522 "reconnect_delay_sec": 0, 00:18:58.522 "fast_io_fail_timeout_sec": 0, 00:18:58.522 "psk": "key0", 00:18:58.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.522 "hdgst": false, 00:18:58.522 "ddgst": false 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "bdev_nvme_set_hotplug", 00:18:58.522 "params": { 00:18:58.522 "period_us": 100000, 00:18:58.522 "enable": false 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "bdev_enable_histogram", 00:18:58.522 "params": { 00:18:58.522 "name": "nvme0n1", 00:18:58.522 "enable": true 00:18:58.522 } 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "method": "bdev_wait_for_examine" 00:18:58.522 } 00:18:58.522 ] 00:18:58.522 }, 00:18:58.522 { 00:18:58.522 "subsystem": "nbd", 00:18:58.522 "config": [] 00:18:58.522 } 00:18:58.522 ] 00:18:58.522 }' 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2810936 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2810936 ']' 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2810936 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2810936 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2810936' 00:18:58.522 killing process with pid 2810936 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2810936 00:18:58.522 Received shutdown signal, test time was about 1.000000 seconds 00:18:58.522 00:18:58.522 Latency(us) 00:18:58.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.522 
=================================================================================================================== 00:18:58.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.522 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2810936 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2810795 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2810795 ']' 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2810795 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2810795 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2810795' 00:18:58.780 killing process with pid 2810795 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2810795 00:18:58.780 18:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2810795 00:18:59.038 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:59.038 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.038 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:59.038 "subsystems": [ 00:18:59.038 { 00:18:59.038 "subsystem": "keyring", 00:18:59.038 "config": [ 00:18:59.038 { 00:18:59.038 "method": "keyring_file_add_key", 00:18:59.038 "params": { 00:18:59.038 "name": "key0", 00:18:59.038 "path": "/tmp/tmp.T1x8jMk37A" 00:18:59.038 } 00:18:59.038 } 00:18:59.038 ] 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "subsystem": "iobuf", 00:18:59.038 "config": [ 00:18:59.038 { 00:18:59.038 "method": "iobuf_set_options", 00:18:59.038 "params": { 00:18:59.038 "small_pool_count": 8192, 00:18:59.038 "large_pool_count": 1024, 00:18:59.038 "small_bufsize": 8192, 00:18:59.038 "large_bufsize": 135168 00:18:59.038 } 00:18:59.038 } 00:18:59.038 ] 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "subsystem": "sock", 00:18:59.038 "config": [ 00:18:59.038 { 00:18:59.038 "method": "sock_set_default_impl", 00:18:59.038 "params": { 00:18:59.038 "impl_name": "posix" 00:18:59.038 } 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "method": "sock_impl_set_options", 00:18:59.038 "params": { 00:18:59.038 "impl_name": "ssl", 00:18:59.038 "recv_buf_size": 4096, 00:18:59.038 "send_buf_size": 4096, 00:18:59.038 "enable_recv_pipe": true, 00:18:59.038 "enable_quickack": false, 00:18:59.038 "enable_placement_id": 0, 00:18:59.038 "enable_zerocopy_send_server": true, 00:18:59.038 "enable_zerocopy_send_client": false, 00:18:59.038 "zerocopy_threshold": 0, 00:18:59.038 "tls_version": 0, 00:18:59.038 "enable_ktls": false 00:18:59.038 } 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "method": "sock_impl_set_options", 00:18:59.038 "params": { 00:18:59.038 "impl_name": "posix", 00:18:59.038 "recv_buf_size": 2097152, 
00:18:59.038 "send_buf_size": 2097152, 00:18:59.038 "enable_recv_pipe": true, 00:18:59.038 "enable_quickack": false, 00:18:59.038 "enable_placement_id": 0, 00:18:59.038 "enable_zerocopy_send_server": true, 00:18:59.038 "enable_zerocopy_send_client": false, 00:18:59.038 "zerocopy_threshold": 0, 00:18:59.038 "tls_version": 0, 00:18:59.038 "enable_ktls": false 00:18:59.038 } 00:18:59.038 } 00:18:59.038 ] 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "subsystem": "vmd", 00:18:59.038 "config": [] 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "subsystem": "accel", 00:18:59.038 "config": [ 00:18:59.038 { 00:18:59.038 "method": "accel_set_options", 00:18:59.038 "params": { 00:18:59.038 "small_cache_size": 128, 00:18:59.038 "large_cache_size": 16, 00:18:59.038 "task_count": 2048, 00:18:59.038 "sequence_count": 2048, 00:18:59.038 "buf_count": 2048 00:18:59.038 } 00:18:59.038 } 00:18:59.038 ] 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "subsystem": "bdev", 00:18:59.038 "config": [ 00:18:59.038 { 00:18:59.038 "method": "bdev_set_options", 00:18:59.038 "params": { 00:18:59.038 "bdev_io_pool_size": 65535, 00:18:59.038 "bdev_io_cache_size": 256, 00:18:59.038 "bdev_auto_examine": true, 00:18:59.038 "iobuf_small_cache_size": 128, 00:18:59.038 "iobuf_large_cache_size": 16 00:18:59.038 } 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "method": "bdev_raid_set_options", 00:18:59.038 "params": { 00:18:59.038 "process_window_size_kb": 1024, 00:18:59.038 "process_max_bandwidth_mb_sec": 0 00:18:59.038 } 00:18:59.038 }, 00:18:59.038 { 00:18:59.038 "method": "bdev_iscsi_set_options", 00:18:59.039 "params": { 00:18:59.039 "timeout_sec": 30 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "bdev_nvme_set_options", 00:18:59.039 "params": { 00:18:59.039 "action_on_timeout": "none", 00:18:59.039 "timeout_us": 0, 00:18:59.039 "timeout_admin_us": 0, 00:18:59.039 "keep_alive_timeout_ms": 10000, 00:18:59.039 "arbitration_burst": 0, 00:18:59.039 "low_priority_weight": 0, 00:18:59.039 "medium_priority_weight": 0, 00:18:59.039 "high_priority_weight": 0, 00:18:59.039 "nvme_adminq_poll_period_us": 10000, 00:18:59.039 "nvme_ioq_poll_period_us": 0, 00:18:59.039 "io_queue_requests": 0, 00:18:59.039 "delay_cmd_submit": true, 00:18:59.039 "transport_retry_count": 4, 00:18:59.039 "bdev_retry_count": 3, 00:18:59.039 "transport_ack_timeout": 0, 00:18:59.039 "ctrlr_loss_timeout_sec": 0, 00:18:59.039 "reconnect_delay_sec": 0, 00:18:59.039 "fast_io_fail_timeout_sec": 0, 00:18:59.039 "disable_auto_failback": false, 00:18:59.039 "generate_uuids": false, 00:18:59.039 "transport_tos": 0, 00:18:59.039 "nvme_error_stat": false, 00:18:59.039 "rdma_srq_size": 0, 00:18:59.039 "io_path_stat": false, 00:18:59.039 "allow_accel_sequence": false, 00:18:59.039 "rdma_max_cq_size": 0, 00:18:59.039 "rdma_cm_event_timeout_ms": 0, 00:18:59.039 "dhchap_digests": [ 00:18:59.039 "sha256", 00:18:59.039 "sha384", 00:18:59.039 "sha512" 00:18:59.039 ], 00:18:59.039 "dhchap_dhgroups": [ 00:18:59.039 "null", 00:18:59.039 "ffdhe2048", 00:18:59.039 "ffdhe3072", 00:18:59.039 "ffdhe4096", 00:18:59.039 "ffdhe6144", 00:18:59.039 "ffdhe8192" 00:18:59.039 ] 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "bdev_nvme_set_hotplug", 00:18:59.039 "params": { 00:18:59.039 "period_us": 100000, 00:18:59.039 "enable": false 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "bdev_malloc_create", 00:18:59.039 "params": { 00:18:59.039 "name": "malloc0", 00:18:59.039 "num_blocks": 8192, 00:18:59.039 "block_size": 4096, 00:18:59.039 
"physical_block_size": 4096, 00:18:59.039 "uuid": "3a0f28e7-09ae-4a7f-ac9d-cf78113af3ec", 00:18:59.039 "optimal_io_boundary": 0, 00:18:59.039 "md_size": 0, 00:18:59.039 "dif_type": 0, 00:18:59.039 "dif_is_head_of_md": false, 00:18:59.039 "dif_pi_format": 0 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "bdev_wait_for_examine" 00:18:59.039 } 00:18:59.039 ] 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "subsystem": "nbd", 00:18:59.039 "config": [] 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "subsystem": "scheduler", 00:18:59.039 "config": [ 00:18:59.039 { 00:18:59.039 "method": "framework_set_scheduler", 00:18:59.039 "params": { 00:18:59.039 "name": "static" 00:18:59.039 } 00:18:59.039 } 00:18:59.039 ] 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "subsystem": "nvmf", 00:18:59.039 "config": [ 00:18:59.039 { 00:18:59.039 "method": "nvmf_set_config", 00:18:59.039 "params": { 00:18:59.039 "discovery_filter": "match_any", 00:18:59.039 "admin_cmd_passthru": { 00:18:59.039 "identify_ctrlr": false 00:18:59.039 } 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "nvmf_set_max_subsystems", 00:18:59.039 "params": { 00:18:59.039 "max_subsystems": 1024 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "nvmf_set_crdt", 00:18:59.039 "params": { 00:18:59.039 "crdt1": 0, 00:18:59.039 "crdt2": 0, 00:18:59.039 "crdt3": 0 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "nvmf_create_transport", 00:18:59.039 "params": { 00:18:59.039 "trtype": "TCP", 00:18:59.039 "max_queue_depth": 128, 00:18:59.039 "max_io_qpairs_per_ctrlr": 127, 00:18:59.039 "in_capsule_data_size": 4096, 00:18:59.039 "max_io_size": 131072, 00:18:59.039 "io_unit_size": 131072, 00:18:59.039 "max_aq_depth": 128, 00:18:59.039 "num_shared_buffers": 511, 00:18:59.039 "buf_cache_size": 4294967295, 00:18:59.039 "dif_insert_or_strip": false, 00:18:59.039 "zcopy": false, 00:18:59.039 "c2h_success": false, 00:18:59.039 "sock_priority": 0, 00:18:59.039 "abort_timeout_sec": 1, 00:18:59.039 "ack_timeout": 0, 00:18:59.039 "data_wr_pool_size": 0 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "nvmf_create_subsystem", 00:18:59.039 "params": { 00:18:59.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.039 "allow_any_host": false, 00:18:59.039 "serial_number": "00000000000000000000", 00:18:59.039 "model_number": "SPDK bdev Controller", 00:18:59.039 "max_namespaces": 32, 00:18:59.039 "min_cntlid": 1, 00:18:59.039 "max_cntlid": 65519, 00:18:59.039 "ana_reporting": false 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "nvmf_subsystem_add_host", 00:18:59.039 "params": { 00:18:59.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.039 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.039 "psk": "key0" 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "nvmf_subsystem_add_ns", 00:18:59.039 "params": { 00:18:59.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.039 "namespace": { 00:18:59.039 "nsid": 1, 00:18:59.039 "bdev_name": "malloc0", 00:18:59.039 "nguid": "3A0F28E709AE4A7FAC9DCF78113AF3EC", 00:18:59.039 "uuid": "3a0f28e7-09ae-4a7f-ac9d-cf78113af3ec", 00:18:59.039 "no_auto_visible": false 00:18:59.039 } 00:18:59.039 } 00:18:59.039 }, 00:18:59.039 { 00:18:59.039 "method": "nvmf_subsystem_add_listener", 00:18:59.039 "params": { 00:18:59.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.039 "listen_address": { 00:18:59.039 "trtype": "TCP", 00:18:59.039 "adrfam": "IPv4", 00:18:59.039 "traddr": "10.0.0.2", 00:18:59.039 "trsvcid": "4420" 
00:18:59.039 }, 00:18:59.039 "secure_channel": false, 00:18:59.039 "sock_impl": "ssl" 00:18:59.039 } 00:18:59.039 } 00:18:59.039 ] 00:18:59.039 } 00:18:59.039 ] 00:18:59.039 }' 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2811235 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2811235 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2811235 ']' 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.039 18:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.039 [2024-07-24 18:01:45.258721] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:18:59.039 [2024-07-24 18:01:45.258816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.039 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.297 [2024-07-24 18:01:45.327720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.297 [2024-07-24 18:01:45.444768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.297 [2024-07-24 18:01:45.444830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.297 [2024-07-24 18:01:45.444858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.297 [2024-07-24 18:01:45.444871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.297 [2024-07-24 18:01:45.444883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
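This restart differs from the first in one detail: nvmf_tgt now receives -c /dev/fd/62, so the JSON captured earlier with save_config is replayed at startup instead of being rebuilt RPC by RPC. The fd comes from feeding the config string through bash process substitution; a minimal sketch of the pattern (variable name illustrative):

    # Replay a saved JSON config without writing a temporary file.
    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    nvmfpid=$!

Since the keyring entry, TCP listener, subsystem, namespace, and host mapping are all in the replayed config, the target comes back up already listening for the TLS host on 10.0.0.2:4420.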
00:18:59.297 [2024-07-24 18:01:45.444966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.555 [2024-07-24 18:01:45.697389] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.555 [2024-07-24 18:01:45.740894] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.555 [2024-07-24 18:01:45.741152] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2811387 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2811387 /var/tmp/bdevperf.sock 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2811387 ']' 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
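bdevperf is likewise restarted from its saved config, this time via /dev/fd/63, and with -z so it idles on its RPC socket until perform_tests arrives. The invocation traced here, unpacked (flag meanings per standard bdevperf usage):

    bdevperf -m 2 \                       # core mask 0x2: run the reactor on core 1
             -z \                         # start idle and wait for RPC
             -r /var/tmp/bdevperf.sock \  # RPC listen address
             -q 128 \                     # queue depth
             -o 4k \                      # I/O size
             -w verify \                  # write, read back, and compare
             -t 1 \                       # run time in seconds
             -c /dev/fd/63                # replay the saved JSON config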
00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.122 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:00.122 "subsystems": [ 00:19:00.122 { 00:19:00.122 "subsystem": "keyring", 00:19:00.122 "config": [ 00:19:00.122 { 00:19:00.122 "method": "keyring_file_add_key", 00:19:00.122 "params": { 00:19:00.122 "name": "key0", 00:19:00.122 "path": "/tmp/tmp.T1x8jMk37A" 00:19:00.122 } 00:19:00.122 } 00:19:00.122 ] 00:19:00.122 }, 00:19:00.122 { 00:19:00.122 "subsystem": "iobuf", 00:19:00.122 "config": [ 00:19:00.122 { 00:19:00.122 "method": "iobuf_set_options", 00:19:00.122 "params": { 00:19:00.122 "small_pool_count": 8192, 00:19:00.122 "large_pool_count": 1024, 00:19:00.122 "small_bufsize": 8192, 00:19:00.122 "large_bufsize": 135168 00:19:00.122 } 00:19:00.122 } 00:19:00.122 ] 00:19:00.122 }, 00:19:00.122 { 00:19:00.122 "subsystem": "sock", 00:19:00.122 "config": [ 00:19:00.122 { 00:19:00.122 "method": "sock_set_default_impl", 00:19:00.122 "params": { 00:19:00.122 "impl_name": "posix" 00:19:00.122 } 00:19:00.122 }, 00:19:00.122 { 00:19:00.122 "method": "sock_impl_set_options", 00:19:00.122 "params": { 00:19:00.122 "impl_name": "ssl", 00:19:00.122 "recv_buf_size": 4096, 00:19:00.122 "send_buf_size": 4096, 00:19:00.122 "enable_recv_pipe": true, 00:19:00.122 "enable_quickack": false, 00:19:00.122 "enable_placement_id": 0, 00:19:00.122 "enable_zerocopy_send_server": true, 00:19:00.122 "enable_zerocopy_send_client": false, 00:19:00.122 "zerocopy_threshold": 0, 00:19:00.122 "tls_version": 0, 00:19:00.122 "enable_ktls": false 00:19:00.122 } 00:19:00.122 }, 00:19:00.122 { 00:19:00.122 "method": "sock_impl_set_options", 00:19:00.122 "params": { 00:19:00.122 "impl_name": "posix", 00:19:00.122 "recv_buf_size": 2097152, 00:19:00.122 "send_buf_size": 2097152, 00:19:00.122 "enable_recv_pipe": true, 00:19:00.122 "enable_quickack": false, 00:19:00.122 "enable_placement_id": 0, 00:19:00.122 "enable_zerocopy_send_server": true, 00:19:00.122 "enable_zerocopy_send_client": false, 00:19:00.122 "zerocopy_threshold": 0, 00:19:00.122 "tls_version": 0, 00:19:00.122 "enable_ktls": false 00:19:00.122 } 00:19:00.122 } 00:19:00.122 ] 00:19:00.122 }, 00:19:00.122 { 00:19:00.122 "subsystem": "vmd", 00:19:00.122 "config": [] 00:19:00.122 }, 00:19:00.122 { 00:19:00.122 "subsystem": "accel", 00:19:00.122 "config": [ 00:19:00.122 { 00:19:00.122 "method": "accel_set_options", 00:19:00.122 "params": { 00:19:00.122 "small_cache_size": 128, 00:19:00.122 "large_cache_size": 16, 00:19:00.122 "task_count": 2048, 00:19:00.122 "sequence_count": 2048, 00:19:00.122 "buf_count": 2048 00:19:00.122 } 00:19:00.122 } 00:19:00.122 ] 00:19:00.122 }, 00:19:00.122 { 00:19:00.123 "subsystem": "bdev", 00:19:00.123 "config": [ 00:19:00.123 { 00:19:00.123 "method": "bdev_set_options", 00:19:00.123 "params": { 00:19:00.123 "bdev_io_pool_size": 65535, 00:19:00.123 "bdev_io_cache_size": 256, 00:19:00.123 "bdev_auto_examine": true, 00:19:00.123 "iobuf_small_cache_size": 128, 00:19:00.123 "iobuf_large_cache_size": 16 00:19:00.123 } 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "method": "bdev_raid_set_options", 00:19:00.123 "params": { 00:19:00.123 "process_window_size_kb": 1024, 00:19:00.123 "process_max_bandwidth_mb_sec": 0 00:19:00.123 } 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "method": "bdev_iscsi_set_options", 00:19:00.123 "params": { 00:19:00.123 
"timeout_sec": 30 00:19:00.123 } 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "method": "bdev_nvme_set_options", 00:19:00.123 "params": { 00:19:00.123 "action_on_timeout": "none", 00:19:00.123 "timeout_us": 0, 00:19:00.123 "timeout_admin_us": 0, 00:19:00.123 "keep_alive_timeout_ms": 10000, 00:19:00.123 "arbitration_burst": 0, 00:19:00.123 "low_priority_weight": 0, 00:19:00.123 "medium_priority_weight": 0, 00:19:00.123 "high_priority_weight": 0, 00:19:00.123 "nvme_adminq_poll_period_us": 10000, 00:19:00.123 "nvme_ioq_poll_period_us": 0, 00:19:00.123 "io_queue_requests": 512, 00:19:00.123 "delay_cmd_submit": true, 00:19:00.123 "transport_retry_count": 4, 00:19:00.123 "bdev_retry_count": 3, 00:19:00.123 "transport_ack_timeout": 0, 00:19:00.123 "ctrlr_loss_timeout_sec": 0, 00:19:00.123 "reconnect_delay_sec": 0, 00:19:00.123 "fast_io_fail_timeout_sec": 0, 00:19:00.123 "disable_auto_failback": false, 00:19:00.123 "generate_uuids": false, 00:19:00.123 "transport_tos": 0, 00:19:00.123 "nvme_error_stat": false, 00:19:00.123 "rdma_srq_size": 0, 00:19:00.123 "io_path_stat": false, 00:19:00.123 "allow_accel_sequence": false, 00:19:00.123 "rdma_max_cq_size": 0, 00:19:00.123 "rdma_cm_event_timeout_ms": 0, 00:19:00.123 "dhchap_digests": [ 00:19:00.123 "sha256", 00:19:00.123 "sha384", 00:19:00.123 "sha512" 00:19:00.123 ], 00:19:00.123 "dhchap_dhgroups": [ 00:19:00.123 "null", 00:19:00.123 "ffdhe2048", 00:19:00.123 "ffdhe3072", 00:19:00.123 "ffdhe4096", 00:19:00.123 "ffdhe6144", 00:19:00.123 "ffdhe8192" 00:19:00.123 ] 00:19:00.123 } 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "method": "bdev_nvme_attach_controller", 00:19:00.123 "params": { 00:19:00.123 "name": "nvme0", 00:19:00.123 "trtype": "TCP", 00:19:00.123 "adrfam": "IPv4", 00:19:00.123 "traddr": "10.0.0.2", 00:19:00.123 "trsvcid": "4420", 00:19:00.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.123 "prchk_reftag": false, 00:19:00.123 "prchk_guard": false, 00:19:00.123 "ctrlr_loss_timeout_sec": 0, 00:19:00.123 "reconnect_delay_sec": 0, 00:19:00.123 "fast_io_fail_timeout_sec": 0, 00:19:00.123 "psk": "key0", 00:19:00.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:00.123 "hdgst": false, 00:19:00.123 "ddgst": false 00:19:00.123 } 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "method": "bdev_nvme_set_hotplug", 00:19:00.123 "params": { 00:19:00.123 "period_us": 100000, 00:19:00.123 "enable": false 00:19:00.123 } 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "method": "bdev_enable_histogram", 00:19:00.123 "params": { 00:19:00.123 "name": "nvme0n1", 00:19:00.123 "enable": true 00:19:00.123 } 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "method": "bdev_wait_for_examine" 00:19:00.123 } 00:19:00.123 ] 00:19:00.123 }, 00:19:00.123 { 00:19:00.123 "subsystem": "nbd", 00:19:00.123 "config": [] 00:19:00.123 } 00:19:00.123 ] 00:19:00.123 }' 00:19:00.123 [2024-07-24 18:01:46.286822] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:19:00.123 [2024-07-24 18:01:46.286911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811387 ] 00:19:00.123 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.123 [2024-07-24 18:01:46.359620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.381 [2024-07-24 18:01:46.497236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.639 [2024-07-24 18:01:46.667685] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.639 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.639 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:00.639 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:00.639 18:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:00.896 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.896 18:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.896 Running I/O for 1 seconds... 00:19:02.268 00:19:02.268 Latency(us) 00:19:02.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.268 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:02.268 Verification LBA range: start 0x0 length 0x2000 00:19:02.268 nvme0n1 : 1.04 2800.60 10.94 0.00 0.00 44853.46 5995.33 62914.56 00:19:02.268 =================================================================================================================== 00:19:02.268 Total : 2800.60 10.94 0.00 0.00 44853.46 5995.33 62914.56 00:19:02.268 0 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:02.268 nvmf_trace.0 00:19:02.268 18:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2811387 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2811387 ']' 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2811387 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2811387 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2811387' 00:19:02.268 killing process with pid 2811387 00:19:02.268 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2811387 00:19:02.268 Received shutdown signal, test time was about 1.000000 seconds 00:19:02.268 00:19:02.268 Latency(us) 00:19:02.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.268 =================================================================================================================== 00:19:02.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.269 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2811387 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:02.527 rmmod nvme_tcp 00:19:02.527 rmmod nvme_fabrics 00:19:02.527 rmmod nvme_keyring 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2811235 ']' 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2811235 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2811235 ']' 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2811235 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.527 18:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2811235 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2811235' 00:19:02.527 killing process with pid 2811235 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2811235 00:19:02.527 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2811235 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.786 18:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.318 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:05.318 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZrJATc7J1s /tmp/tmp.a6RSxrz8K8 /tmp/tmp.T1x8jMk37A 00:19:05.318 00:19:05.318 real 1m20.047s 00:19:05.318 user 2m8.321s 00:19:05.318 sys 0m27.016s 00:19:05.318 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:05.318 18:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.318 ************************************ 00:19:05.318 END TEST nvmf_tls 00:19:05.318 ************************************ 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.318 ************************************ 00:19:05.318 START TEST nvmf_fips 00:19:05.318 ************************************ 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:05.318 * Looking for test storage... 
00:19:05.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:05.318 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:05.319 Error setting digest 00:19:05.319 0072FBD04D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:05.319 0072FBD04D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:05.319 18:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:07.226 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 
00:19:07.226 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:07.226 Found net devices under 0000:09:00.0: cvl_0_0 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:07.226 Found net devices under 0000:09:00.1: cvl_0_1 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.226 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.227 
18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:07.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:07.227 00:19:07.227 --- 10.0.0.2 ping statistics --- 00:19:07.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.227 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:19:07.227 00:19:07.227 --- 10.0.0.1 ping statistics --- 00:19:07.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.227 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2813640 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2813640 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2813640 ']' 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.227 18:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.227 [2024-07-24 18:01:53.337770] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
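Recapping the nvmf_tcp_init plumbing traced above as a minimal sketch (this is a condensed annotation, not part of the captured trace; interface and namespace names are the ones from this run, and the preliminary "ip -4 addr flush" calls plus error handling are abbreviated):

    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, namespace side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host reachability

Splitting the two E810 ports across a network namespace lets one machine act as both initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0), which is why the nvmf_tgt invocation above is wrapped in "ip netns exec cvl_0_0_ns_spdk" while bdevperf later connects from the host side.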
00:19:07.227 [2024-07-24 18:01:53.337839] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.227 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.227 [2024-07-24 18:01:53.403992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.485 [2024-07-24 18:01:53.521952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.485 [2024-07-24 18:01:53.522006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.485 [2024-07-24 18:01:53.522023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.485 [2024-07-24 18:01:53.522037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.485 [2024-07-24 18:01:53.522049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.485 [2024-07-24 18:01:53.522079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:08.051 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.309 [2024-07-24 18:01:54.566486] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.567 [2024-07-24 18:01:54.582481] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.567 [2024-07-24 18:01:54.582693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.567 
[2024-07-24 18:01:54.613957] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:08.567 malloc0 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2813892 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2813892 /var/tmp/bdevperf.sock 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2813892 ']' 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.567 18:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.567 [2024-07-24 18:01:54.704281] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:19:08.567 [2024-07-24 18:01:54.704353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813892 ] 00:19:08.567 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.567 [2024-07-24 18:01:54.759539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.826 [2024-07-24 18:01:54.864633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.391 18:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.391 18:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:09.391 18:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:09.649 [2024-07-24 18:01:55.855028] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.649 [2024-07-24 18:01:55.855167] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:09.907 TLSTESTn1 00:19:09.907 18:01:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.907 Running I/O for 10 seconds... 
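For reference, the TLS attach that starts this run reduces to two RPC calls; a sketch using the rpc.py/bdevperf.py helpers from the SPDK tree with paths shortened to be relative to the repository root (the key file holds the interchange-format PSK NVMeTLSkey-1:01:... written and chmod 0600'd by fips.sh above):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # drives the 10 s verify workload

Note that both PSK mechanisms exercised here are already flagged as deprecated in the warnings above (the PSK path on the target side, spdk_nvme_ctrlr_opts.psk on the initiator side), scheduled for removal in v24.09.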
00:19:22.104
00:19:22.104 Latency(us)
00:19:22.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:22.104 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:22.104 Verification LBA range: start 0x0 length 0x2000
00:19:22.104 TLSTESTn1 : 10.04 3059.26 11.95 0.00 0.00 41731.70 8349.77 55147.33
00:19:22.104 ===================================================================================================================
00:19:22.104 Total : 3059.26 11.95 0.00 0.00 41731.70 8349.77 55147.33
00:19:22.104 0
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:22.104 nvmf_trace.0
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2813892
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2813892 ']'
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2813892
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2813892
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2813892'
00:19:22.104 killing process with pid 2813892
00:19:22.104 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2813892
00:19:22.104 Received shutdown signal, test time was about 10.000000 seconds
00:19:22.105
00:19:22.105 Latency(us)
00:19:22.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:22.105 ===================================================================================================================
00:19:22.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:22.105
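As a sanity check on the numbers above: 3059.26 IOPS at the 4096-byte IO size works out to 3059.26/256 ≈ 11.95 MiB/s, matching the MiB/s column, and the ~41.7 ms average latency is consistent with the queue depth of 128 (by Little's law, 128/3059.26 ≈ 41.8 ms per IO in steady state). The second, all-zero table appears to be bdevperf printing an empty summary from its shutdown path after the timed run has already completed.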
[2024-07-24 18:02:06.269373] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2813892 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.105 rmmod nvme_tcp 00:19:22.105 rmmod nvme_fabrics 00:19:22.105 rmmod nvme_keyring 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2813640 ']' 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2813640 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2813640 ']' 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2813640 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2813640 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2813640' 00:19:22.105 killing process with pid 2813640 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2813640 00:19:22.105 [2024-07-24 18:02:06.621284] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2813640 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.105 18:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.105 18:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.673 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:22.673 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:22.673 00:19:22.673 real 0m17.908s 00:19:22.673 user 0m20.692s 00:19:22.673 sys 0m6.630s 00:19:22.673 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.673 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:22.673 ************************************ 00:19:22.673 END TEST nvmf_fips 00:19:22.673 ************************************ 00:19:22.932 18:02:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:22.932 18:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:22.932 18:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.932 18:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:22.932 ************************************ 00:19:22.932 START TEST nvmf_control_msg_list 00:19:22.932 ************************************ 00:19:22.932 18:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:22.932 * Looking for test storage... 
00:19:22.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.932 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.932 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:22.932 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # : 0 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@285 -- # xtrace_disable 00:19:22.933 18:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # e810=() 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # x722=() 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # mlx=() 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@298 -- # local -ga mlx 00:19:24.835 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:24.836 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:24.836 Found 0000:09:00.1 (0x8086 - 0x159b) 
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:19:24.836 Found net devices under 0000:09:00.0: cvl_0_0
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:19:24.836 Found net devices under 0000:09:00.1: cvl_0_1
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # is_hw=yes
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:24.836 18:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:24.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:24.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms
00:19:24.836
00:19:24.836 --- 10.0.0.2 ping statistics ---
00:19:24.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:24.836 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:24.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:24.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms
00:19:24.836
00:19:24.836 --- 10.0.0.1 ping statistics ---
00:19:24.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:24.836 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # return 0
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:24.836 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # nvmfpid=2817153
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # waitforlisten 2817153
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@829 -- # '[' -z 2817153 ']'
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:25.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
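Condensed, the plumbing traced above hands one E810 port (cvl_0_0) to a fresh network namespace as the target interface, leaves the other (cvl_0_1) in the root namespace as the initiator, verifies reachability in both directions, and then starts the target. The commands below are lifted from the trace; the polling loop at the end is a simplified stand-in for the harness's waitforlisten helper:

```bash
# Target port moves into its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch the SPDK NVMe-oF target inside the namespace, then wait for its
# RPC socket to appear (simplified waitforlisten).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
```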
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:25.139 18:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:25.139 [2024-07-24 18:02:11.172738] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:19:25.139 [2024-07-24 18:02:11.172842] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:25.139 EAL: No free 2048 kB hugepages reported on node 1
00:19:25.139 [2024-07-24 18:02:11.238498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:25.139 [2024-07-24 18:02:11.357625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:25.139 [2024-07-24 18:02:11.357682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:25.139 [2024-07-24 18:02:11.357706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:25.139 [2024-07-24 18:02:11.357720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:25.139 [2024-07-24 18:02:11.357732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:25.139 [2024-07-24 18:02:11.357762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:26.074 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:26.074 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # return 0
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:26.075 [2024-07-24 18:02:12.163739] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
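With the target up, the test provisions it over the RPC socket. The notable knob is --control-msg-num 1, which shrinks the TCP transport's control-message pool to a single buffer so that concurrent admin-queue traffic has to queue for it; that queuing path appears to be exactly what this control_msg_list test exercises. The harness's rpc_cmd wraps SPDK's scripts/rpc.py, so a standalone equivalent of the calls traced here and continued just below might look like this (the rpc.py path and socket location are assumptions; all parameters come from the trace):

```bash
# Sketch: provision the in-namespace target the way the trace does.
RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$RPC bdev_malloc_create -b Malloc0 32 512    # 32 MiB malloc bdev, 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```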
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:26.075 Malloc0
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:26.075 [2024-07-24 18:02:12.213260] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2817307
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2817308
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2817309
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2817307
00:19:26.075 18:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:26.075 EAL: No free 2048 kB hugepages reported on node 1
00:19:26.075 EAL: No free 2048 kB hugepages reported on node 1
00:19:26.075 EAL: No free 2048 kB hugepages reported on node 1
00:19:26.075 [2024-07-24 18:02:12.316284] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:26.075 [2024-07-24 18:02:12.316402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set
[the tcp.c:1653 line above repeats continuously with only the timestamp advancing, from 18:02:12.316402 through 18:02:12.320095]
00:19:26.078 [2024-07-24 18:02:12.332190] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:26.078 [2024-07-24 18:02:12.332310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set
[the same tcp.c:1653 line repeats again, from 18:02:12.332310 through at least 18:02:12.335379, where this capture breaks off mid-line]
same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.080 [2024-07-24 18:02:12.335585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.335990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.336009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.336030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.336049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.081 [2024-07-24 18:02:12.336067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.341 [2024-07-24 18:02:12.348113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66630 is same with the state(4) to be set 00:19:26.341 [2024-07-24 18:02:12.348298] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
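The subsystem.c:1572 warning above records that a connection to the discovery service on TCP/10.0.0.2/4420 was allowed even though that listener was never explicitly attached to the discovery subsystem, behavior SPDK flags as deprecated. A hedged illustration of attaching the listener explicitly follows; the rpc.py helper and the well-known discovery NQN are assumptions drawn from common SPDK usage, not commands that appear anywhere in this log.

```bash
# Illustrative only, not taken from this run: register the TCP listener with
# the discovery subsystem (well-known NQN) so the deprecated implicit
# allowance reported at subsystem.c:1572 is no longer exercised.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 4420
```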
00:19:26.341 [2024-07-24 18:02:12.556274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66e20 is same with the state(4) to be set
00:19:26.343 [2024-07-24 18:02:12.560074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc66e20 is same with the state(4) to be set
00:19:27.715 Initializing NVMe Controllers
00:19:27.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:27.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:27.715 Initialization complete. Launching workers.
00:19:27.715 ========================================================
00:19:27.715                                                           Latency(us)
00:19:27.715 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:19:27.715 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:      24.99       0.10   40021.35   12348.53   63851.48
00:19:27.715 ========================================================
00:19:27.715 Total                                                                  :      24.99       0.10   40021.35   12348.53   63851.48
00:19:27.715
00:19:27.715 Initializing NVMe Controllers
00:19:27.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:27.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:27.715 Initialization complete. Launching workers.
00:19:27.715 ========================================================
00:19:27.715                                                           Latency(us)
00:19:27.715 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:19:27.715 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:      27.82       0.11   35948.10   14960.90   63848.79
00:19:27.715 ========================================================
00:19:27.715 Total                                                                  :      27.82       0.11   35948.10   14960.90   63848.79
00:19:27.715
00:19:27.715 Initializing NVMe Controllers
00:19:27.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:27.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:27.715 Initialization complete. Launching workers.
00:19:27.715 ========================================================
00:19:27.715                                                           Latency(us)
00:19:27.715 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:19:27.715 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:      65.00       0.25   15469.44    7317.99   17946.06
00:19:27.715 ========================================================
00:19:27.715 Total                                                                  :      65.00       0.25   15469.44    7317.99   17946.06
00:19:27.715
00:19:27.973 18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2817308
18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2817309
18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # nvmftestfini
18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # nvmfcleanup
18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@117 -- # sync
18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@120 -- # set +e
18:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # for i in {1..20}
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set -e
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # return 0
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # '[' -n 2817153 ']'
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # killprocess 2817153
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@948 -- # '[' -z 2817153 ']'
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # kill -0 2817153
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@953 -- # uname
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2817153
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # process_name=reactor_0
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2817153'
killing process with pid 2817153
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@967 -- # kill 2817153
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # wait 2817153
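The xtrace records above expose the whole teardown path: the two background perf workers are reaped with wait (control_msg_list.sh lines 34-35), nvmftestfini calls nvmfcleanup, which unloads nvme-tcp and nvme-fabrics inside a set +e/set -e window with a 20-iteration retry loop, and killprocess then stops the target daemon (pid 2817153). A minimal bash reconstruction of the two helpers, paraphrased from the trace rather than copied from nvmf/common.sh and autotest_common.sh; the retry pacing is an assumption.

```bash
#!/usr/bin/env bash
# Minimal sketch reconstructed from the "-- #" xtrace lines above; the real
# helpers live in nvmf/common.sh and common/autotest_common.sh, and their
# bodies are paraphrased here, not copied.

nvmfcleanup() {
    sync                                       # nvmf/common.sh@117
    set +e                                     # @120: tolerate unload failures
    for i in {1..20}; do                       # @121: retry while refs drain
        # @122-123: -v makes modprobe print the rmmod lines seen above
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                                # pacing between retries (assumed)
    done
    set -e                                     # @124
}

killprocess() {
    local pid=$1
    if [ -z "$pid" ]; then return 1; fi        # autotest_common.sh@948
    if kill -0 "$pid" 2>/dev/null; then        # @952: is the pid still alive?
        if [ "$(uname)" = Linux ]; then        # @953
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # @954
            if [ "$process_name" = sudo ]; then              # @958: special case,
                return 1                       # real handling paraphrased away
            fi
        fi
        echo "killing process with pid $pid"   # @966
        kill "$pid"                            # @967
        wait "$pid"                            # @972: reap it
    else
        # @975: taken on the second pass further below, once the pid is gone
        echo "Process with pid $pid is not found"
    fi
}
```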
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' '' == iso ']'
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # nvmf_tcp_fini
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # remove_spdk_ns
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
18:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@1 -- # process_shm --id 0
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@806 -- # type=--id
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@807 -- # id=0
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:30.394 18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@818 -- # for n in $shm_files
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@821 -- # return 0
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@1 -- # nvmftestfini
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # nvmfcleanup
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@117 -- # sync
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@120 -- # set +e
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # for i in {1..20}
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set -e
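process_shm --id 0, traced above, is the autotest helper that archives the SPDK trace buffer left in /dev/shm: it globs for files matching '*.0' and tars each one into the build's output directory. A compact bash reconstruction under the same assumptions (paths come from the trace at @819; the --pid branch and error handling are paraphrased, not the literal source):

```bash
#!/usr/bin/env bash
# Sketch of process_shm as seen in the trace (common/autotest_common.sh@806-821).
output_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output  # from @819

process_shm() {
    local type=$1 id=$2                       # @806-807: this run used --id 0
    if [ "$type" = "--pid" ]; then            # @808: alternate lookup, not taken
        :                                     # --pid variant elided in this sketch
    fi
    local shm_files
    shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')  # @812 -> nvmf_trace.0
    if [ -z "$shm_files" ]; then return 1; fi                # @814: nothing to collect
    local n
    for n in $shm_files; do                   # @818
        # @819: tar -v echoes each member, hence the bare "nvmf_trace.0" line above
        tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
    done
    return 0                                  # @821
}

process_shm --id 0                            # archives /dev/shm/nvmf_trace.0
```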
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # return 0
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # '[' -n 2817153 ']'
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # killprocess 2817153
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@948 -- # '[' -z 2817153 ']'
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # kill -0 2817153
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2817153) - No such process
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@975 -- # echo 'Process with pid 2817153 is not found'
Process with pid 2817153 is not found
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' '' == iso ']'
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # nvmf_tcp_fini
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # remove_spdk_ns
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:30.395 real 0m7.488s
user 0m4.034s
sys 0m2.144s
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1124 -- # xtrace_disable
18:02:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:19:30.395 ************************************
00:19:30.395 END TEST nvmf_control_msg_list
00:19:30.395 ************************************
18:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # '[' 0 -eq 1 ']'
18:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # [[ phy == phy ]]
18:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # '[' tcp = tcp ']'
18:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # gather_supported_nvmf_pci_devs
18:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable
18:02:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=()
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=()
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=()
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=()
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=()
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=()
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=()
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 ))
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
Found 0000:09:00.0 (0x8086 - 0x159b)
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:32.297 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:32.297 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:32.298 Found net devices under 0000:09:00.0: cvl_0_0 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:32.298 Found net devices under 0000:09:00.1: cvl_0_1 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # (( 2 > 0 )) 00:19:32.298 18:02:18 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.298 ************************************ 00:19:32.298 START TEST nvmf_perf_adq 00:19:32.298 ************************************ 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:32.298 * Looking for test storage... 00:19:32.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
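
The growing PATH echoes above are paths/export.sh being re-sourced once per nested script, and build_nvmf_app_args is assembling the target's command line into a bash array. A cut-down sketch of that array pattern (the binary path here is a placeholder, not taken from the log):

NVMF_APP_SHM_ID=0
NVMF_APP=(build/bin/nvmf_tgt)                  # placeholder path, for illustration only
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id and tracepoint mask, as traced at common.sh@29
"${NVMF_APP[@]}" --wait-for-rpc &              # array expansion keeps one argv word per element
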
00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.298 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.557 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:32.558 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.558 18:02:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:34.460 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:34.460 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:34.460 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.461 18:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:34.461 Found net devices under 0000:09:00.0: cvl_0_0 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:34.461 Found net devices under 0000:09:00.1: cvl_0_1 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:34.461 18:02:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:35.028 18:02:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:36.928 18:02:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.204 18:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.204 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:42.205 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:42.205 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:42.205 18:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:42.205 Found net devices under 0000:09:00.0: cvl_0_0 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:42.205 Found net devices under 0000:09:00.1: cvl_0_1 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:42.205 
18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:42.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:19:42.205 00:19:42.205 --- 10.0.0.2 ping statistics --- 00:19:42.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.205 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:19:42.205 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:42.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:19:42.206 00:19:42.206 --- 10.0.0.1 ping statistics --- 00:19:42.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.206 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # 
nvmfpid=2822004 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2822004 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2822004 ']' 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.206 18:02:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.206 [2024-07-24 18:02:28.340260] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:19:42.206 [2024-07-24 18:02:28.340355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.206 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.206 [2024-07-24 18:02:28.408165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.465 [2024-07-24 18:02:28.528255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.465 [2024-07-24 18:02:28.528313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.465 [2024-07-24 18:02:28.528338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.465 [2024-07-24 18:02:28.528352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.465 [2024-07-24 18:02:28.528365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
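
The nvmfpid/waitforlisten step above reduces to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A hedged approximation of that shape (the real waitforlisten lives in autotest_common.sh and is more careful; rpc_get_methods is simply a cheap RPC to probe with):

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# poll the default UNIX-domain RPC socket until the app responds (the helper caps retries at 100)
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
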
00:19:42.465 [2024-07-24 18:02:28.528459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.465 [2024-07-24 18:02:28.528514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.465 [2024-07-24 18:02:28.528626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.465 [2024-07-24 18:02:28.528628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.398 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.398 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:43.398 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.398 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.398 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.398 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 [2024-07-24 18:02:29.499921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 Malloc1 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.399 [2024-07-24 18:02:29.550890] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2822169 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:43.399 18:02:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:43.399 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.299 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:45.299 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.299 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.557 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.557 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:45.557 "tick_rate": 2700000000, 00:19:45.557 "poll_groups": [ 00:19:45.557 { 00:19:45.557 "name": "nvmf_tgt_poll_group_000", 00:19:45.557 "admin_qpairs": 1, 00:19:45.557 "io_qpairs": 1, 00:19:45.557 "current_admin_qpairs": 1, 00:19:45.557 
"current_io_qpairs": 1, 00:19:45.557 "pending_bdev_io": 0, 00:19:45.557 "completed_nvme_io": 20242, 00:19:45.557 "transports": [ 00:19:45.557 { 00:19:45.557 "trtype": "TCP" 00:19:45.557 } 00:19:45.557 ] 00:19:45.557 }, 00:19:45.557 { 00:19:45.557 "name": "nvmf_tgt_poll_group_001", 00:19:45.557 "admin_qpairs": 0, 00:19:45.557 "io_qpairs": 1, 00:19:45.557 "current_admin_qpairs": 0, 00:19:45.557 "current_io_qpairs": 1, 00:19:45.557 "pending_bdev_io": 0, 00:19:45.557 "completed_nvme_io": 19666, 00:19:45.557 "transports": [ 00:19:45.557 { 00:19:45.557 "trtype": "TCP" 00:19:45.557 } 00:19:45.557 ] 00:19:45.557 }, 00:19:45.557 { 00:19:45.557 "name": "nvmf_tgt_poll_group_002", 00:19:45.557 "admin_qpairs": 0, 00:19:45.557 "io_qpairs": 1, 00:19:45.557 "current_admin_qpairs": 0, 00:19:45.557 "current_io_qpairs": 1, 00:19:45.557 "pending_bdev_io": 0, 00:19:45.557 "completed_nvme_io": 19636, 00:19:45.557 "transports": [ 00:19:45.557 { 00:19:45.557 "trtype": "TCP" 00:19:45.557 } 00:19:45.557 ] 00:19:45.557 }, 00:19:45.557 { 00:19:45.557 "name": "nvmf_tgt_poll_group_003", 00:19:45.557 "admin_qpairs": 0, 00:19:45.557 "io_qpairs": 1, 00:19:45.557 "current_admin_qpairs": 0, 00:19:45.557 "current_io_qpairs": 1, 00:19:45.557 "pending_bdev_io": 0, 00:19:45.557 "completed_nvme_io": 20651, 00:19:45.557 "transports": [ 00:19:45.557 { 00:19:45.557 "trtype": "TCP" 00:19:45.557 } 00:19:45.557 ] 00:19:45.557 } 00:19:45.557 ] 00:19:45.557 }' 00:19:45.557 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:45.557 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:45.557 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:45.557 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:45.557 18:02:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2822169 00:19:53.701 Initializing NVMe Controllers 00:19:53.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:53.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:53.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:53.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:53.701 Initialization complete. Launching workers. 
00:19:53.701 ======================================================== 00:19:53.701 Latency(us) 00:19:53.701 Device Information : IOPS MiB/s Average min max 00:19:53.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10302.50 40.24 6212.64 2482.51 10207.38 00:19:53.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10402.60 40.64 6154.15 3329.91 10191.92 00:19:53.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10776.10 42.09 5939.07 2196.11 8862.77 00:19:53.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10642.30 41.57 6013.26 1847.37 7836.76 00:19:53.701 ======================================================== 00:19:53.701 Total : 42123.49 164.54 6077.84 1847.37 10207.38 00:19:53.701 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.701 rmmod nvme_tcp 00:19:53.701 rmmod nvme_fabrics 00:19:53.701 rmmod nvme_keyring 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2822004 ']' 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2822004 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2822004 ']' 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2822004 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2822004 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.701 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2822004' 00:19:53.701 killing process with pid 2822004 00:19:53.702 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2822004 00:19:53.702 18:02:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2822004 00:19:53.960 18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.960 
18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.960 18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.960 18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.960 18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.960 18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.960 18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.960 18:02:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.861 18:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.861 18:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:55.861 18:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:56.795 18:02:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:58.709 18:02:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.978 18:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:03.978 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:03.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.978 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:03.979 Found net devices under 0000:09:00.0: cvl_0_0 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.979 18:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:20:03.979 Found net devices under 0000:09:00.1: cvl_0_1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:03.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:03.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms
00:20:03.979
00:20:03.979 --- 10.0.0.2 ping statistics ---
00:20:03.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:03.979 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:03.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:03.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms
00:20:03.979
00:20:03.979 --- 10.0.0.1 ping statistics ---
00:20:03.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:03.979 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:20:03.979 net.core.busy_poll = 1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:20:03.979 net.core.busy_read = 1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:20:03.979 18:02:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:20:03.979 18:02:50
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2824785 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2824785 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2824785 ']' 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.979 18:02:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.979 [2024-07-24 18:02:50.071187] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:03.979 [2024-07-24 18:02:50.071283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.979 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.979 [2024-07-24 18:02:50.142078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.237 [2024-07-24 18:02:50.262968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.237 [2024-07-24 18:02:50.263013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.237 [2024-07-24 18:02:50.263034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.237 [2024-07-24 18:02:50.263045] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.237 [2024-07-24 18:02:50.263055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
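For readability, the adq_configure_driver sequence replayed just before this target launch boils down to the following shell steps (device cvl_0_0, the cvl_0_0_ns_spdk namespace, and the two-traffic-class layout are taken verbatim from this run; this is a condensed restatement, not new configuration):

    # all of these run inside the cvl_0_0_ns_spdk namespace, as the trace does
    ethtool --offload cvl_0_0 hw-tc-offload on                # let the e810 handle traffic classes in hardware
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1       # busy-poll sockets instead of sleeping on reads
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP (port 4420) into TC 1

The effect of this steering is what the nvmf_get_stats check further below verifies: only the poll groups owning the two TC-1 queues should end up carrying I/O.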
00:20:04.237 [2024-07-24 18:02:50.263194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.237 [2024-07-24 18:02:50.263218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.237 [2024-07-24 18:02:50.263240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.237 [2024-07-24 18:02:50.263242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.802 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.060 [2024-07-24 18:02:51.191072] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.060 Malloc1 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.060 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.061 [2024-07-24 18:02:51.243627] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.061 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.061 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2825022 00:20:05.061 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:05.061 18:02:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:05.061 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.591 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:07.591 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.591 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.591 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.591 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:07.591 "tick_rate": 2700000000, 00:20:07.591 "poll_groups": [ 00:20:07.591 { 00:20:07.592 "name": "nvmf_tgt_poll_group_000", 00:20:07.592 "admin_qpairs": 1, 00:20:07.592 "io_qpairs": 1, 00:20:07.592 "current_admin_qpairs": 1, 00:20:07.592 
"current_io_qpairs": 1, 00:20:07.592 "pending_bdev_io": 0, 00:20:07.592 "completed_nvme_io": 25328, 00:20:07.592 "transports": [ 00:20:07.592 { 00:20:07.592 "trtype": "TCP" 00:20:07.592 } 00:20:07.592 ] 00:20:07.592 }, 00:20:07.592 { 00:20:07.592 "name": "nvmf_tgt_poll_group_001", 00:20:07.592 "admin_qpairs": 0, 00:20:07.592 "io_qpairs": 3, 00:20:07.592 "current_admin_qpairs": 0, 00:20:07.592 "current_io_qpairs": 3, 00:20:07.592 "pending_bdev_io": 0, 00:20:07.592 "completed_nvme_io": 26713, 00:20:07.592 "transports": [ 00:20:07.592 { 00:20:07.592 "trtype": "TCP" 00:20:07.592 } 00:20:07.592 ] 00:20:07.592 }, 00:20:07.592 { 00:20:07.592 "name": "nvmf_tgt_poll_group_002", 00:20:07.592 "admin_qpairs": 0, 00:20:07.592 "io_qpairs": 0, 00:20:07.592 "current_admin_qpairs": 0, 00:20:07.592 "current_io_qpairs": 0, 00:20:07.592 "pending_bdev_io": 0, 00:20:07.592 "completed_nvme_io": 0, 00:20:07.592 "transports": [ 00:20:07.592 { 00:20:07.592 "trtype": "TCP" 00:20:07.592 } 00:20:07.592 ] 00:20:07.592 }, 00:20:07.592 { 00:20:07.592 "name": "nvmf_tgt_poll_group_003", 00:20:07.592 "admin_qpairs": 0, 00:20:07.592 "io_qpairs": 0, 00:20:07.592 "current_admin_qpairs": 0, 00:20:07.592 "current_io_qpairs": 0, 00:20:07.592 "pending_bdev_io": 0, 00:20:07.592 "completed_nvme_io": 0, 00:20:07.592 "transports": [ 00:20:07.592 { 00:20:07.592 "trtype": "TCP" 00:20:07.592 } 00:20:07.592 ] 00:20:07.592 } 00:20:07.592 ] 00:20:07.592 }' 00:20:07.592 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:07.592 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:07.592 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:07.592 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:07.592 18:02:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2825022 00:20:15.702 Initializing NVMe Controllers 00:20:15.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:15.702 Initialization complete. Launching workers. 
00:20:15.702 Initializing NVMe Controllers
00:20:15.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:20:15.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:20:15.702 Initialization complete. Launching workers.
00:20:15.702 ========================================================
00:20:15.702 Latency(us)
00:20:15.702 Device Information : IOPS MiB/s Average min max
00:20:15.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4358.85 17.03 14685.16 2443.61 61115.61
00:20:15.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13371.14 52.23 4795.91 1319.28 44583.23
00:20:15.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4751.24 18.56 13497.38 1922.72 62457.07
00:20:15.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4888.44 19.10 13097.94 1901.71 62936.07
00:20:15.702 ========================================================
00:20:15.702 Total : 27369.66 106.91 9364.20 1319.28 62936.07
00:20:15.702
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:15.702 rmmod nvme_tcp
00:20:15.702 rmmod nvme_fabrics
00:20:15.702 rmmod nvme_keyring
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2824785 ']'
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2824785
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2824785 ']'
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2824785
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2824785
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2824785'
00:20:15.702 killing process with pid 2824785
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2824785
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2824785
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
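The nvmftestfini/killprocess teardown traced above is symmetric to the setup; in outline (the module and process handling are verbatim from the trace, while the namespace removal step is an assumption about what remove_spdk_ns, traced just below, amounts to):

    modprobe -v -r nvme-tcp nvme-fabrics     # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    kill 2824785 && wait 2824785             # killprocess: the target shows up in ps as reactor_0
    ip netns delete cvl_0_0_ns_spdk          # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1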
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:15.702 18:03:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:17.606 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:17.606 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:20:17.606
00:20:17.606 real 0m45.361s
00:20:17.606 user 2m42.420s
00:20:17.606 sys 0m11.034s
00:20:17.606 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:17.606 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:17.606 ************************************
00:20:17.606 END TEST nvmf_perf_adq
00:20:17.606 ************************************
00:20:17.865 18:03:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@64 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:20:17.865 18:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:20:17.865 18:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:17.865 18:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:17.865 ************************************
00:20:17.865 START TEST nvmf_shutdown
00:20:17.865 ************************************
00:20:17.865 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:20:17.865 * Looking for test storage...
00:20:17.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 18:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:17.866 18:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.866 18:03:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:17.866 ************************************ 00:20:17.866 START TEST nvmf_shutdown_tc1 00:20:17.866 ************************************ 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.866 18:03:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.396 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.396 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.396 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:20.397 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:20.397 18:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:20.397 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:20.397 Found net devices under 0000:09:00.0: cvl_0_0 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:20.397 Found net devices under 0000:09:00.1: cvl_0_1 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.397 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.398 18:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:20.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:20.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms
00:20:20.398
00:20:20.398 --- 10.0.0.2 ping statistics ---
00:20:20.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:20.398 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:20.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:20.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms
00:20:20.398
00:20:20.398 --- 10.0.0.1 ping statistics ---
00:20:20.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:20.398 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10
-- # set +x 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2828323 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2828323 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2828323 ']' 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.398 18:03:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.398 [2024-07-24 18:03:06.287586] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:20.398 [2024-07-24 18:03:06.287674] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.398 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.398 [2024-07-24 18:03:06.353198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.398 [2024-07-24 18:03:06.464330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.398 [2024-07-24 18:03:06.464390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.398 [2024-07-24 18:03:06.464415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.398 [2024-07-24 18:03:06.464427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.398 [2024-07-24 18:03:06.464436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
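Note the core mask: the perf_adq run above used -m 0xF (cores 0-3), while this shutdown target is started with -m 0x1E, i.e. binary 11110 or cores 1-4, which is exactly the set of 'Reactor started on core N' notices that follow. A quick way to decode such a mask:

    mask=0x1E
    for c in $(seq 0 7); do
      (( (mask >> c) & 1 )) && echo "reactor on core $c"
    done
    # prints cores 1, 2, 3 and 4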
00:20:20.398 [2024-07-24 18:03:06.464526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.398 [2024-07-24 18:03:06.464590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.398 [2024-07-24 18:03:06.464639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:20.398 [2024-07-24 18:03:06.464642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.331 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:21.331 [2024-07-24 18:03:07.260391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.332 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 Malloc1 00:20:21.332 [2024-07-24 18:03:07.340674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.332 Malloc2 00:20:21.332 Malloc3 00:20:21.332 Malloc4 00:20:21.332 Malloc5 00:20:21.332 Malloc6 00:20:21.592 Malloc7 00:20:21.592 Malloc8 00:20:21.592 Malloc9 00:20:21.592 Malloc10 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2828764 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2828764 /var/tmp/bdevperf.sock 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2828764 ']' 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:21.592 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.592 { 00:20:21.592 "params": { 00:20:21.592 "name": "Nvme$subsystem", 00:20:21.592 "trtype": "$TEST_TRANSPORT", 00:20:21.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.592 "adrfam": "ipv4", 00:20:21.592 "trsvcid": "$NVMF_PORT", 00:20:21.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.592 "hdgst": ${hdgst:-false}, 00:20:21.592 "ddgst": ${ddgst:-false} 00:20:21.592 }, 00:20:21.592 "method": "bdev_nvme_attach_controller" 00:20:21.592 } 00:20:21.592 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 
00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:21.593 { 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme$subsystem", 00:20:21.593 "trtype": "$TEST_TRANSPORT", 00:20:21.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "$NVMF_PORT", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:21.593 "hdgst": ${hdgst:-false}, 00:20:21.593 "ddgst": ${ddgst:-false} 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 } 00:20:21.593 EOF 00:20:21.593 )") 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:21.593 18:03:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme1", 00:20:21.593 "trtype": "tcp", 00:20:21.593 "traddr": "10.0.0.2", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "4420", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.593 "hdgst": false, 00:20:21.593 "ddgst": false 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 },{ 00:20:21.593 "params": { 00:20:21.593 "name": "Nvme2", 00:20:21.593 "trtype": "tcp", 00:20:21.593 "traddr": "10.0.0.2", 00:20:21.593 "adrfam": "ipv4", 00:20:21.593 "trsvcid": "4420", 00:20:21.593 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.593 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:21.593 "hdgst": false, 00:20:21.593 "ddgst": false 00:20:21.593 }, 00:20:21.593 "method": "bdev_nvme_attach_controller" 00:20:21.593 },{ 00:20:21.593 "params": { 00:20:21.594 "name": "Nvme3", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 },{ 00:20:21.594 "params": { 00:20:21.594 "name": "Nvme4", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 },{ 00:20:21.594 "params": { 00:20:21.594 "name": "Nvme5", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 },{ 00:20:21.594 "params": { 00:20:21.594 "name": "Nvme6", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 },{ 00:20:21.594 "params": { 00:20:21.594 "name": "Nvme7", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 },{ 00:20:21.594 "params": { 00:20:21.594 "name": "Nvme8", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 
00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 },{ 00:20:21.594 "params": { 00:20:21.594 "name": "Nvme9", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 },{ 00:20:21.594 "params": { 00:20:21.594 "name": "Nvme10", 00:20:21.594 "trtype": "tcp", 00:20:21.594 "traddr": "10.0.0.2", 00:20:21.594 "adrfam": "ipv4", 00:20:21.594 "trsvcid": "4420", 00:20:21.594 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:21.594 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:21.594 "hdgst": false, 00:20:21.594 "ddgst": false 00:20:21.594 }, 00:20:21.594 "method": "bdev_nvme_attach_controller" 00:20:21.594 }' 00:20:21.594 [2024-07-24 18:03:07.859161] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:21.594 [2024-07-24 18:03:07.859243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:21.892 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.892 [2024-07-24 18:03:07.923099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.892 [2024-07-24 18:03:08.033024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2828764 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:23.787 18:03:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:24.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2828764 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2828323 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.723 { 00:20:24.723 "params": { 00:20:24.723 "name": "Nvme$subsystem", 00:20:24.723 "trtype": "$TEST_TRANSPORT", 00:20:24.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.723 "adrfam": "ipv4", 00:20:24.723 "trsvcid": "$NVMF_PORT", 00:20:24.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.723 "hdgst": ${hdgst:-false}, 00:20:24.723 "ddgst": ${ddgst:-false} 00:20:24.723 }, 00:20:24.723 "method": "bdev_nvme_attach_controller" 00:20:24.723 } 00:20:24.723 EOF 00:20:24.723 )") 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.723 { 00:20:24.723 "params": { 00:20:24.723 "name": "Nvme$subsystem", 00:20:24.723 "trtype": "$TEST_TRANSPORT", 00:20:24.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.723 "adrfam": "ipv4", 00:20:24.723 "trsvcid": "$NVMF_PORT", 00:20:24.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.723 "hdgst": ${hdgst:-false}, 00:20:24.723 "ddgst": ${ddgst:-false} 00:20:24.723 }, 00:20:24.723 "method": "bdev_nvme_attach_controller" 00:20:24.723 } 00:20:24.723 EOF 00:20:24.723 )") 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.723 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.724 { 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme$subsystem", 00:20:24.724 "trtype": "$TEST_TRANSPORT", 00:20:24.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "$NVMF_PORT", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.724 "hdgst": ${hdgst:-false}, 00:20:24.724 "ddgst": ${ddgst:-false} 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 } 00:20:24.724 EOF 00:20:24.724 )") 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
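
The config=() / cat <<-EOF / config+=() loop traced above is gen_nvmf_target_json's accumulation step: one bdev_nvme_attach_controller stanza per subsystem argument, comma-joined (IFS=",", printf '%s\n' "${config[*]}") into a bdev subsystem config that jq then validates and pretty-prints. A condensed sketch of the pattern with the substitutions this run performs (tcp / 10.0.0.2 / 4420); the function name here is illustrative, TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the test environment, and plain <<EOF stands in for the script's tab-stripping <<-EOF:

    gen_nvmf_target_json_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One attach-controller stanza per subsystem number.
            config+=("$(
                cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        # Splice the comma-joined stanzas into a bdev config; jq validates it.
        jq . <<JSON
    { "subsystems": [ { "subsystem": "bdev", "config": [ $(
        IFS=","
        printf '%s\n' "${config[*]}"
    ) ] } ] }
    JSON
    }

That jq / IFS=, / printf ordering is exactly what the next trace entries show: the printf runs inside the command substitution embedded in the jq heredoc.
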
00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:24.724 18:03:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme1", 00:20:24.724 "trtype": "tcp", 00:20:24.724 "traddr": "10.0.0.2", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "4420", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.724 "hdgst": false, 00:20:24.724 "ddgst": false 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 },{ 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme2", 00:20:24.724 "trtype": "tcp", 00:20:24.724 "traddr": "10.0.0.2", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "4420", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:24.724 "hdgst": false, 00:20:24.724 "ddgst": false 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 },{ 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme3", 00:20:24.724 "trtype": "tcp", 00:20:24.724 "traddr": "10.0.0.2", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "4420", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:24.724 "hdgst": false, 00:20:24.724 "ddgst": false 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 },{ 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme4", 00:20:24.724 "trtype": "tcp", 00:20:24.724 "traddr": "10.0.0.2", 00:20:24.724 "adrfam": "ipv4", 00:20:24.724 "trsvcid": "4420", 00:20:24.724 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:24.724 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:24.724 "hdgst": false, 00:20:24.724 "ddgst": false 00:20:24.724 }, 00:20:24.724 "method": "bdev_nvme_attach_controller" 00:20:24.724 },{ 00:20:24.724 "params": { 00:20:24.724 "name": "Nvme5", 00:20:24.724 "trtype": "tcp", 00:20:24.724 "traddr": "10.0.0.2", 00:20:24.725 "adrfam": "ipv4", 00:20:24.725 "trsvcid": "4420", 00:20:24.725 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:24.725 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:24.725 "hdgst": false, 00:20:24.725 "ddgst": false 00:20:24.725 }, 00:20:24.725 "method": "bdev_nvme_attach_controller" 00:20:24.725 },{ 00:20:24.725 "params": { 00:20:24.725 "name": "Nvme6", 00:20:24.725 "trtype": "tcp", 00:20:24.725 "traddr": "10.0.0.2", 00:20:24.725 "adrfam": "ipv4", 00:20:24.725 "trsvcid": "4420", 00:20:24.725 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:24.725 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:24.725 "hdgst": false, 00:20:24.725 "ddgst": false 00:20:24.725 }, 00:20:24.725 "method": "bdev_nvme_attach_controller" 00:20:24.725 },{ 00:20:24.725 "params": { 00:20:24.725 "name": "Nvme7", 00:20:24.725 "trtype": "tcp", 00:20:24.725 "traddr": "10.0.0.2", 00:20:24.725 "adrfam": "ipv4", 00:20:24.725 "trsvcid": "4420", 00:20:24.725 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:24.725 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:24.725 "hdgst": false, 00:20:24.725 "ddgst": false 00:20:24.725 }, 00:20:24.725 "method": "bdev_nvme_attach_controller" 00:20:24.725 },{ 00:20:24.725 "params": { 00:20:24.725 "name": "Nvme8", 00:20:24.725 "trtype": "tcp", 00:20:24.725 "traddr": "10.0.0.2", 00:20:24.725 "adrfam": "ipv4", 00:20:24.725 "trsvcid": "4420", 00:20:24.725 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:24.725 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:24.725 "hdgst": false, 00:20:24.725 "ddgst": false 00:20:24.725 }, 00:20:24.725 "method": "bdev_nvme_attach_controller" 00:20:24.725 },{ 00:20:24.725 "params": { 00:20:24.725 "name": "Nvme9", 00:20:24.725 "trtype": "tcp", 00:20:24.725 "traddr": "10.0.0.2", 00:20:24.725 "adrfam": "ipv4", 00:20:24.725 "trsvcid": "4420", 00:20:24.725 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:24.725 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:24.725 "hdgst": false, 00:20:24.725 "ddgst": false 00:20:24.725 }, 00:20:24.725 "method": "bdev_nvme_attach_controller" 00:20:24.725 },{ 00:20:24.725 "params": { 00:20:24.725 "name": "Nvme10", 00:20:24.725 "trtype": "tcp", 00:20:24.725 "traddr": "10.0.0.2", 00:20:24.725 "adrfam": "ipv4", 00:20:24.725 "trsvcid": "4420", 00:20:24.725 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:24.725 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:24.725 "hdgst": false, 00:20:24.725 "ddgst": false 00:20:24.725 }, 00:20:24.725 "method": "bdev_nvme_attach_controller" 00:20:24.725 }' 00:20:24.725 [2024-07-24 18:03:10.878679] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:24.725 [2024-07-24 18:03:10.878768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829440 ] 00:20:24.725 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.725 [2024-07-24 18:03:10.942068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.983 [2024-07-24 18:03:11.054894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.354 Running I/O for 1 seconds... 00:20:27.289 00:20:27.289 Latency(us) 00:20:27.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.289 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 Verification LBA range: start 0x0 length 0x400 00:20:27.289 Nvme1n1 : 1.09 176.77 11.05 0.00 0.00 358528.95 21554.06 288940.94 00:20:27.289 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 Verification LBA range: start 0x0 length 0x400 00:20:27.289 Nvme2n1 : 1.14 227.47 14.22 0.00 0.00 272160.18 7961.41 253211.69 00:20:27.289 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 Verification LBA range: start 0x0 length 0x400 00:20:27.289 Nvme3n1 : 1.12 228.49 14.28 0.00 0.00 268197.55 19223.89 248551.35 00:20:27.289 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 Verification LBA range: start 0x0 length 0x400 00:20:27.289 Nvme4n1 : 1.13 227.50 14.22 0.00 0.00 264844.52 17767.54 271853.04 00:20:27.289 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 Verification LBA range: start 0x0 length 0x400 00:20:27.289 Nvme5n1 : 1.16 219.87 13.74 0.00 0.00 269893.59 22427.88 276513.37 00:20:27.289 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 Verification LBA range: start 0x0 length 0x400 00:20:27.289 Nvme6n1 : 1.16 221.30 13.83 0.00 0.00 263376.40 19418.07 282727.16 00:20:27.289 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 Verification LBA range: start 0x0 length 0x400 00:20:27.289 Nvme7n1 : 1.14 225.05 14.07 0.00 0.00 254095.93 16408.27 273406.48 00:20:27.289 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:27.289 
00:20:27.289 
00:20:27.289                                                           Latency(us)
00:20:27.289 Device Information   : runtime(s)     IOPS   MiB/s  Fail/s   TO/s     Average        min        max
00:20:27.289 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme1n1            :       1.09   176.77   11.05    0.00   0.00   358528.95   21554.06  288940.94
00:20:27.289 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme2n1            :       1.14   227.47   14.22    0.00   0.00   272160.18    7961.41  253211.69
00:20:27.289 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme3n1            :       1.12   228.49   14.28    0.00   0.00   268197.55   19223.89  248551.35
00:20:27.289 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme4n1            :       1.13   227.50   14.22    0.00   0.00   264844.52   17767.54  271853.04
00:20:27.289 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme5n1            :       1.16   219.87   13.74    0.00   0.00   269893.59   22427.88  276513.37
00:20:27.289 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme6n1            :       1.16   221.30   13.83    0.00   0.00   263376.40   19418.07  282727.16
00:20:27.289 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme7n1            :       1.14   225.05   14.07    0.00   0.00   254095.93   16408.27  273406.48
00:20:27.289 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme8n1            :       1.17   219.09   13.69    0.00   0.00   257437.39   22913.33  285834.05
00:20:27.289 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme9n1            :       1.16   220.56   13.79    0.00   0.00   251006.10   14757.74  318456.41
00:20:27.289 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.289   Verification LBA range: start 0x0 length 0x400
00:20:27.289   Nvme10n1           :       1.17   218.60   13.66    0.00   0.00   249195.90   23204.60  287387.50
00:20:27.289 ===================================================================================================================
00:20:27.289 Total                :               2184.70  136.54    0.00   0.00   268631.73    7961.41  318456.41
00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.547 rmmod nvme_tcp 00:20:27.547 rmmod nvme_fabrics 00:20:27.547 rmmod nvme_keyring 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2828323 ']' 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2828323 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2828323 ']' 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2828323 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux
']' 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2828323 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2828323' 00:20:27.547 killing process with pid 2828323 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2828323 00:20:27.547 18:03:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2828323 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.114 18:03:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.646 00:20:30.646 real 0m12.380s 00:20:30.646 user 0m35.624s 00:20:30.646 sys 0m3.376s 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.646 ************************************ 00:20:30.646 END TEST nvmf_shutdown_tc1 00:20:30.646 ************************************ 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:30.646 ************************************ 00:20:30.646 START TEST nvmf_shutdown_tc2 00:20:30.646 ************************************ 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:30.646 18:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.646 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:30.647 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:30.647 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:30.647 Found net devices under 0000:09:00.0: cvl_0_0 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.647 18:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:30.647 Found net devices under 0000:09:00.1: cvl_0_1 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.647 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.648 18:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:30.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:30.648 00:20:30.648 --- 10.0.0.2 ping statistics --- 00:20:30.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.648 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:20:30.648 00:20:30.648 --- 10.0.0.1 ping statistics --- 00:20:30.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.648 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2830203 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2830203 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2830203 ']' 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.648 18:03:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.648 [2024-07-24 18:03:16.664778] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:30.648 [2024-07-24 18:03:16.664849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.648 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.648 [2024-07-24 18:03:16.743815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.648 [2024-07-24 18:03:16.878881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.648 [2024-07-24 18:03:16.878934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.648 [2024-07-24 18:03:16.878966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.648 [2024-07-24 18:03:16.878989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.648 [2024-07-24 18:03:16.879008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
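
The trace above is the whole transport bring-up for this test case: discover the two e810 ports (cvl_0_0, cvl_0_1), split them across network namespaces, verify reachability in both directions, then start nvmf_tgt pinned to cores 1-4 (-m 0x1E) inside the namespace. Condensed into a runnable sketch — every command is lifted from the trace, and only the socket-wait loop at the end is illustrative, standing in for the waitforlisten helper from autotest_common.sh whose body this log does not show:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # illustrative stand-in for waitforlisten

Splitting the two ports of one physical NIC across namespaces forces the NVMe/TCP traffic out one port and back in through the other rather than short-circuiting over loopback, which is what the -phy- flavor of this job is meant to exercise.
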
00:20:30.648 [2024-07-24 18:03:16.879116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.648 [2024-07-24 18:03:16.879228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.648 [2024-07-24 18:03:16.879293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:30.648 [2024-07-24 18:03:16.879303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.906 [2024-07-24 18:03:17.041471] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:30.906 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:30.907 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.907 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:30.907 Malloc1 00:20:30.907 [2024-07-24 18:03:17.116770] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.907 Malloc2 00:20:31.164 Malloc3 00:20:31.164 Malloc4 00:20:31.164 Malloc5 00:20:31.164 Malloc6 00:20:31.164 Malloc7 00:20:31.423 Malloc8 00:20:31.423 Malloc9 00:20:31.423 Malloc10 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2830382 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2830382 /var/tmp/bdevperf.sock 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2830382 ']' 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.423 18:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 "name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.423 "hdgst": ${hdgst:-false}, 00:20:31.423 "ddgst": ${ddgst:-false} 00:20:31.423 }, 00:20:31.423 "method": "bdev_nvme_attach_controller" 00:20:31.423 } 00:20:31.423 EOF 00:20:31.423 )") 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 "name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.423 "hdgst": ${hdgst:-false}, 00:20:31.423 "ddgst": ${ddgst:-false} 00:20:31.423 }, 00:20:31.423 "method": "bdev_nvme_attach_controller" 00:20:31.423 } 00:20:31.423 EOF 00:20:31.423 )") 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 
"name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.423 "hdgst": ${hdgst:-false}, 00:20:31.423 "ddgst": ${ddgst:-false} 00:20:31.423 }, 00:20:31.423 "method": "bdev_nvme_attach_controller" 00:20:31.423 } 00:20:31.423 EOF 00:20:31.423 )") 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 "name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.423 "hdgst": ${hdgst:-false}, 00:20:31.423 "ddgst": ${ddgst:-false} 00:20:31.423 }, 00:20:31.423 "method": "bdev_nvme_attach_controller" 00:20:31.423 } 00:20:31.423 EOF 00:20:31.423 )") 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 "name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.423 "hdgst": ${hdgst:-false}, 00:20:31.423 "ddgst": ${ddgst:-false} 00:20:31.423 }, 00:20:31.423 "method": "bdev_nvme_attach_controller" 00:20:31.423 } 00:20:31.423 EOF 00:20:31.423 )") 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 "name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.423 "hdgst": ${hdgst:-false}, 00:20:31.423 "ddgst": ${ddgst:-false} 00:20:31.423 }, 00:20:31.423 "method": "bdev_nvme_attach_controller" 00:20:31.423 } 00:20:31.423 EOF 00:20:31.423 )") 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 "name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.423 "hdgst": ${hdgst:-false}, 00:20:31.423 "ddgst": ${ddgst:-false} 00:20:31.423 }, 00:20:31.423 "method": "bdev_nvme_attach_controller" 00:20:31.423 } 00:20:31.423 EOF 00:20:31.423 )") 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.423 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.423 { 00:20:31.423 "params": { 00:20:31.423 "name": "Nvme$subsystem", 00:20:31.423 "trtype": "$TEST_TRANSPORT", 00:20:31.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.423 "adrfam": "ipv4", 00:20:31.423 "trsvcid": "$NVMF_PORT", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.424 "hdgst": ${hdgst:-false}, 00:20:31.424 "ddgst": ${ddgst:-false} 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 } 00:20:31.424 EOF 00:20:31.424 )") 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.424 { 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme$subsystem", 00:20:31.424 "trtype": "$TEST_TRANSPORT", 00:20:31.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "$NVMF_PORT", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.424 "hdgst": ${hdgst:-false}, 00:20:31.424 "ddgst": ${ddgst:-false} 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 } 00:20:31.424 EOF 00:20:31.424 )") 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.424 { 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme$subsystem", 00:20:31.424 "trtype": "$TEST_TRANSPORT", 00:20:31.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "$NVMF_PORT", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.424 "hdgst": ${hdgst:-false}, 00:20:31.424 "ddgst": ${ddgst:-false} 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 } 00:20:31.424 EOF 00:20:31.424 )") 00:20:31.424 18:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:31.424 18:03:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme1", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme2", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme3", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme4", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme5", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme6", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme7", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme8", 00:20:31.424 "trtype": "tcp", 
00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme9", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 },{ 00:20:31.424 "params": { 00:20:31.424 "name": "Nvme10", 00:20:31.424 "trtype": "tcp", 00:20:31.424 "traddr": "10.0.0.2", 00:20:31.424 "adrfam": "ipv4", 00:20:31.424 "trsvcid": "4420", 00:20:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:31.424 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:31.424 "hdgst": false, 00:20:31.424 "ddgst": false 00:20:31.424 }, 00:20:31.424 "method": "bdev_nvme_attach_controller" 00:20:31.424 }' 00:20:31.424 [2024-07-24 18:03:17.646850] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:31.424 [2024-07-24 18:03:17.646938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830382 ] 00:20:31.424 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.682 [2024-07-24 18:03:17.709326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.683 [2024-07-24 18:03:17.817711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.052 Running I/O for 10 seconds... 
00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:33.617 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:33.875 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:33.875 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:33.875 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:33.875 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:33.875 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.875 18:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:33.875 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.875 18:03:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2830382 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2830382 ']' 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2830382 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830382 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830382' 00:20:33.875 killing process with pid 2830382 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2830382 00:20:33.875 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2830382 00:20:33.875 Received shutdown signal, test time was about 0.914133 seconds 00:20:33.875 00:20:33.875 Latency(us) 00:20:33.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme1n1 : 0.90 213.81 13.36 0.00 0.00 295743.08 22233.69 282727.16 00:20:33.875 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme2n1 : 0.87 226.98 14.19 0.00 0.00 269590.78 4053.52 271853.04 00:20:33.875 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme3n1 : 0.87 221.65 13.85 0.00 0.00 272883.11 36311.80 234570.33 00:20:33.875 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme4n1 : 0.88 219.30 13.71 0.00 0.00 269884.43 34369.99 250104.79 00:20:33.875 Job: 
Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme5n1 : 0.91 211.80 13.24 0.00 0.00 274300.02 23204.60 274959.93 00:20:33.875 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme6n1 : 0.89 219.15 13.70 0.00 0.00 258302.12 2087.44 276513.37 00:20:33.875 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme7n1 : 0.88 217.85 13.62 0.00 0.00 253942.14 18932.62 282727.16 00:20:33.875 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme8n1 : 0.89 216.10 13.51 0.00 0.00 250533.36 20874.43 256318.58 00:20:33.875 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme9n1 : 0.91 208.03 13.00 0.00 0.00 254487.91 22427.88 313796.08 00:20:33.875 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:33.875 Verification LBA range: start 0x0 length 0x400 00:20:33.875 Nvme10n1 : 0.90 212.51 13.28 0.00 0.00 243841.96 20583.16 279620.27 00:20:33.875 =================================================================================================================== 00:20:33.875 Total : 2167.17 135.45 0.00 0.00 264364.88 2087.44 313796.08 00:20:34.133 18:03:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2830203 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.506 rmmod nvme_tcp 00:20:35.506 rmmod nvme_fabrics 00:20:35.506 rmmod nvme_keyring 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.506 
18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2830203 ']' 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2830203 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2830203 ']' 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2830203 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830203 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830203' 00:20:35.506 killing process with pid 2830203 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2830203 00:20:35.506 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2830203 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.766 18:03:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:38.299 00:20:38.299 real 0m7.574s 00:20:38.299 user 0m22.388s 00:20:38.299 sys 0m1.499s 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.299 ************************************ 
00:20:38.299 END TEST nvmf_shutdown_tc2 00:20:38.299 ************************************ 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:38.299 ************************************ 00:20:38.299 START TEST nvmf_shutdown_tc3 00:20:38.299 ************************************ 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.299 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:38.300 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:38.300 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:38.300 Found net devices under 0000:09:00.0: cvl_0_0 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:38.300 Found net devices under 0000:09:00.1: cvl_0_1 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.300 18:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.300 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:20:38.301 00:20:38.301 --- 10.0.0.2 ping statistics --- 00:20:38.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.301 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:20:38.301 00:20:38.301 --- 10.0.0.1 ping statistics --- 00:20:38.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.301 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2831296 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2831296 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2831296 ']' 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
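The trace above is the harness bringing up its NVMe/TCP test topology on the two e810 ports it discovered: cvl_0_0 is moved into a private network namespace to act as the target, cvl_0_1 stays in the root namespace as the initiator, and the two pings prove reachability in both directions before nvmf_tgt is started inside the namespace. A condensed sketch of those nvmf_tcp_init steps, using the interface names and addresses from this run:

# Condensed sketch of the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268);
# names and addresses are the ones from this run, not constants of the script.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"                # private namespace for the target port
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target NIC port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1      # target -> initiator

Every target-side command from here on is wrapped in "ip netns exec cvl_0_0_ns_spdk" via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt launch above carries the repeated netns prefix.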
00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.301 18:03:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:38.301 [2024-07-24 18:03:24.286739] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:38.301 [2024-07-24 18:03:24.286846] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.301 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.301 [2024-07-24 18:03:24.354822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.301 [2024-07-24 18:03:24.473533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.301 [2024-07-24 18:03:24.473610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.301 [2024-07-24 18:03:24.473627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.301 [2024-07-24 18:03:24.473641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.301 [2024-07-24 18:03:24.473652] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.301 [2024-07-24 18:03:24.473736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.301 [2024-07-24 18:03:24.473851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.301 [2024-07-24 18:03:24.473920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:38.301 [2024-07-24 18:03:24.473922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.234 [2024-07-24 18:03:25.244684] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.234 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:20:39.234 Malloc1 00:20:39.234 [2024-07-24 18:03:25.320707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.234 Malloc2 00:20:39.234 Malloc3 00:20:39.234 Malloc4 00:20:39.234 Malloc5 00:20:39.492 Malloc6 00:20:39.492 Malloc7 00:20:39.492 Malloc8 00:20:39.492 Malloc9 00:20:39.492 Malloc10 00:20:39.492 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.492 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:39.492 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.492 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2831483 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2831483 /var/tmp/bdevperf.sock 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2831483 ']' 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
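The create_subsystems step traced above (target/shutdown.sh@22-35) batches its setup into rpcs.txt and replays the whole file through a single rpc_cmd call; the Malloc1 through Malloc10 bdevs and the listener on 10.0.0.2 port 4420 are its visible effect. The xtrace shows only the loop shape and the bare cat appends, so the RPC lines in this sketch are an illustrative reconstruction, not the verbatim script:

# Sketch of the create_subsystems loop. Only num_subsystems, the rpcs.txt path
# ($testdir here stands for the .../spdk/test/nvmf/target directory seen in the
# trace) and the one-shot replay are confirmed; the heredoc body is assumed.
num_subsystems=({1..10})
rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # shutdown.sh@35: replay all queued RPCs at once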
00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.751 { 00:20:39.751 "params": { 00:20:39.751 "name": "Nvme$subsystem", 00:20:39.751 "trtype": "$TEST_TRANSPORT", 00:20:39.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.751 "adrfam": "ipv4", 00:20:39.751 "trsvcid": "$NVMF_PORT", 00:20:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.751 "hdgst": ${hdgst:-false}, 00:20:39.751 "ddgst": ${ddgst:-false} 00:20:39.751 }, 00:20:39.751 "method": "bdev_nvme_attach_controller" 00:20:39.751 } 00:20:39.751 EOF 00:20:39.751 )") 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.751 { 00:20:39.751 "params": { 00:20:39.751 "name": "Nvme$subsystem", 00:20:39.751 "trtype": "$TEST_TRANSPORT", 00:20:39.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.751 "adrfam": "ipv4", 00:20:39.751 "trsvcid": "$NVMF_PORT", 00:20:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.751 "hdgst": ${hdgst:-false}, 00:20:39.751 "ddgst": ${ddgst:-false} 00:20:39.751 }, 00:20:39.751 "method": "bdev_nvme_attach_controller" 00:20:39.751 } 00:20:39.751 EOF 00:20:39.751 )") 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.751 { 00:20:39.751 "params": { 00:20:39.751 "name": "Nvme$subsystem", 00:20:39.751 "trtype": "$TEST_TRANSPORT", 00:20:39.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.751 "adrfam": "ipv4", 00:20:39.751 "trsvcid": "$NVMF_PORT", 00:20:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.751 "hdgst": ${hdgst:-false}, 00:20:39.751 "ddgst": ${ddgst:-false} 00:20:39.751 }, 00:20:39.751 "method": "bdev_nvme_attach_controller" 00:20:39.751 } 00:20:39.751 EOF 00:20:39.751 )") 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.751 { 00:20:39.751 "params": { 00:20:39.751 "name": "Nvme$subsystem", 00:20:39.751 
"trtype": "$TEST_TRANSPORT", 00:20:39.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.751 "adrfam": "ipv4", 00:20:39.751 "trsvcid": "$NVMF_PORT", 00:20:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.751 "hdgst": ${hdgst:-false}, 00:20:39.751 "ddgst": ${ddgst:-false} 00:20:39.751 }, 00:20:39.751 "method": "bdev_nvme_attach_controller" 00:20:39.751 } 00:20:39.751 EOF 00:20:39.751 )") 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.751 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.751 { 00:20:39.751 "params": { 00:20:39.751 "name": "Nvme$subsystem", 00:20:39.751 "trtype": "$TEST_TRANSPORT", 00:20:39.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.751 "adrfam": "ipv4", 00:20:39.751 "trsvcid": "$NVMF_PORT", 00:20:39.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.751 "hdgst": ${hdgst:-false}, 00:20:39.752 "ddgst": ${ddgst:-false} 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 } 00:20:39.752 EOF 00:20:39.752 )") 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.752 { 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme$subsystem", 00:20:39.752 "trtype": "$TEST_TRANSPORT", 00:20:39.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "$NVMF_PORT", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.752 "hdgst": ${hdgst:-false}, 00:20:39.752 "ddgst": ${ddgst:-false} 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 } 00:20:39.752 EOF 00:20:39.752 )") 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.752 { 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme$subsystem", 00:20:39.752 "trtype": "$TEST_TRANSPORT", 00:20:39.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "$NVMF_PORT", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.752 "hdgst": ${hdgst:-false}, 00:20:39.752 "ddgst": ${ddgst:-false} 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 } 00:20:39.752 EOF 00:20:39.752 )") 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.752 18:03:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.752 { 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme$subsystem", 00:20:39.752 "trtype": "$TEST_TRANSPORT", 00:20:39.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "$NVMF_PORT", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.752 "hdgst": ${hdgst:-false}, 00:20:39.752 "ddgst": ${ddgst:-false} 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 } 00:20:39.752 EOF 00:20:39.752 )") 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.752 { 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme$subsystem", 00:20:39.752 "trtype": "$TEST_TRANSPORT", 00:20:39.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "$NVMF_PORT", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.752 "hdgst": ${hdgst:-false}, 00:20:39.752 "ddgst": ${ddgst:-false} 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 } 00:20:39.752 EOF 00:20:39.752 )") 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.752 { 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme$subsystem", 00:20:39.752 "trtype": "$TEST_TRANSPORT", 00:20:39.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "$NVMF_PORT", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.752 "hdgst": ${hdgst:-false}, 00:20:39.752 "ddgst": ${ddgst:-false} 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 } 00:20:39.752 EOF 00:20:39.752 )") 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
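The heredoc cascade above is gen_nvmf_target_json (nvmf/common.sh@532-558) emitting one bdev_nvme_attach_controller stanza per subsystem id, comma-joining them with IFS and validating the result with jq; bdevperf consumes it through bash process substitution, which is where the --json /dev/fd/63 in its command line comes from. A minimal sketch of that shape; the outer subsystems/bdev wrapper is an assumption, since the trace shows only the per-controller fragments and the final join:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach-controller stanza per subsystem id, as traced above
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # comma-join the stanzas; wrapping them in a bdev subsystem config is assumed here
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

In effect the runner then executes bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 10, with bash substituting /dev/fd/63 for the generated stream.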
00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:39.752 18:03:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme1", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme2", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme3", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme4", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme5", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme6", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme7", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme8", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:39.752 "hdgst": false, 00:20:39.752 "ddgst": false 00:20:39.752 }, 00:20:39.752 "method": "bdev_nvme_attach_controller" 00:20:39.752 },{ 00:20:39.752 "params": { 00:20:39.752 "name": "Nvme9", 00:20:39.752 "trtype": "tcp", 00:20:39.752 "traddr": "10.0.0.2", 00:20:39.752 "adrfam": "ipv4", 00:20:39.752 "trsvcid": "4420", 00:20:39.752 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:39.752 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:39.752 "hdgst": false, 00:20:39.753 "ddgst": false 00:20:39.753 }, 00:20:39.753 "method": "bdev_nvme_attach_controller" 00:20:39.753 },{ 00:20:39.753 "params": { 00:20:39.753 "name": "Nvme10", 00:20:39.753 "trtype": "tcp", 00:20:39.753 "traddr": "10.0.0.2", 00:20:39.753 "adrfam": "ipv4", 00:20:39.753 "trsvcid": "4420", 00:20:39.753 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:39.753 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:39.753 "hdgst": false, 00:20:39.753 "ddgst": false 00:20:39.753 }, 00:20:39.753 "method": "bdev_nvme_attach_controller" 00:20:39.753 }' 00:20:39.753 [2024-07-24 18:03:25.829183] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:20:39.753 [2024-07-24 18:03:25.829263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831483 ] 00:20:39.753 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.753 [2024-07-24 18:03:25.891303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.753 [2024-07-24 18:03:25.999197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.651 Running I/O for 10 seconds... 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:41.651 18:03:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:41.940 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:41.940 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:41.940 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:41.940 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:41.940 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.940 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:41.940 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:42.208 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.477 18:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2831296
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2831296 ']'
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2831296
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2831296
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2831296'
00:20:42.477 killing process with pid 2831296
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2831296
00:20:42.477 18:03:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2831296
00:20:42.477 [2024-07-24 18:03:28.504727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190a920 is same with the state(6) to be set
[previous message repeated through 2024-07-24 18:03:28.505641]
00:20:42.477 [2024-07-24 18:03:28.506488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:42.477 [2024-07-24 18:03:28.506527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.477 [2024-07-24 18:03:28.506601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:42.477 [2024-07-24 18:03:28.506675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.477 [2024-07-24 18:03:28.506696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:42.477 [2024-07-24 18:03:28.506814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.477 [2024-07-24 18:03:28.506831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:42.477 [2024-07-24 18:03:28.506845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.477 [2024-07-24 18:03:28.506858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139c830 is same with the state(6) to be set
00:20:42.477 [2024-07-24 18:03:28.507205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190d440 is same with the state(6) to be set
[previous message repeated through 2024-07-24 18:03:28.508050]
00:20:42.479 [2024-07-24 18:03:28.511255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:42.479 [2024-07-24 18:03:28.511339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.479 [2024-07-24 18:03:28.511371] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.511868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.511989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.512067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.512185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512302] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512387] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.512533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:1[2024-07-24 18:03:28.512599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.512627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.512639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.512652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:1[2024-07-24 18:03:28.512715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.479 [2024-07-24 18:03:28.512741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.479 [2024-07-24 18:03:28.512751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.479 [2024-07-24 18:03:28.512754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.512767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with [2024-07-24 18:03:28.512781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:1the state(6) to be set 00:20:42.480 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.512845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.512857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.512873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 18:03:28.512887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.512913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.512926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.512939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.512951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with [2024-07-24 18:03:28.512963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:1the state(6) to be set 00:20:42.480 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.512979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.512992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.512996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.513011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.513018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.513026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.513040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.513044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.513056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.513071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.513075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b2a0 is same with the state(6) to be set 00:20:42.480 [2024-07-24 18:03:28.513086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.513134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.513163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.513193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 [2024-07-24 18:03:28.513222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.480 [2024-07-24 18:03:28.513237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.480 
00:20:42.480 [2024-07-24 18:03:28.513251 - 18:03:28.514215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32..63 nsid:1 lba:28672..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [one command per cid, lba advancing by 128]
00:20:42.480 nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [one completion per command above]
00:20:42.481 [2024-07-24 18:03:28.514252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:42.481 [2024-07-24 18:03:28.514326] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1522cb0 was disconnected and freed. reset controller.
00:20:42.481 [2024-07-24 18:03:28.514198 - 18:03:28.514989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190b780 is same with the state(6) to be set [message repeated 62 times]
00:20:42.482 [2024-07-24 18:03:28.517259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190c5e0 is same with the state(6) to be set
00:20:42.482 [2024-07-24 18:03:28.517874 - 18:03:28.518703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190caa0 is same with the state(6) to be set [message repeated 63 times]
00:20:42.483 [2024-07-24 18:03:28.519423 - 18:03:28.520251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190cf60 is same with the state(6) to be set [message repeated 63 times]
00:20:42.483 [2024-07-24 18:03:28.534726 - 18:03:28.535002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25..31 nsid:1 lba:27776..28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [one command per cid, lba advancing by 128]
00:20:42.483 nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [one completion per command]
00:20:42.484 [2024-07-24 18:03:28.535017] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.535983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.535997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.536012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.536026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.536042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.536056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.536072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.536097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.536120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.536135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.484 [2024-07-24 18:03:28.536151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.484 [2024-07-24 18:03:28.536165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.485 [2024-07-24 18:03:28.536811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.536865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:42.485 [2024-07-24 18:03:28.536947] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1444ae0 was disconnected and freed. reset controller. 
00:20:42.485 [2024-07-24 18:03:28.537411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563570 is same with the state(6) to be set 00:20:42.485 [2024-07-24 18:03:28.537584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554cd0 is same with the state(6) to be set 00:20:42.485 [2024-07-24 18:03:28.537751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.537903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.537916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e610 is same with the state(6) to be set 00:20:42.485 [2024-07-24 18:03:28.537965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.538041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.538069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.538095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.538127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.538146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.538162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.485 [2024-07-24 18:03:28.538176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.485 [2024-07-24 18:03:28.538189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0b50 is same with the state(6) to be set 00:20:42.486 [2024-07-24 18:03:28.538232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:42.486 [2024-07-24 18:03:28.538308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bf400 is same with the state(6) to be set 00:20:42.486 [2024-07-24 18:03:28.538439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cb910 is same with the state(6) to be set 00:20:42.486 [2024-07-24 18:03:28.538621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538747] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14717a0 is same with the state(6) to be set 00:20:42.486 [2024-07-24 18:03:28.538807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.538923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.538945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c80 is same with the state(6) to be set 00:20:42.486 [2024-07-24 18:03:28.538979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139c830 (9): Bad file descriptor 00:20:42.486 [2024-07-24 18:03:28.539029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.539049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.539064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.539077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.539095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.539121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.539138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.486 [2024-07-24 18:03:28.539152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.539164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc280 is same with the state(6) to be set 00:20:42.486 [2024-07-24 
18:03:28.539260] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:42.486 [2024-07-24 18:03:28.540652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.486 [2024-07-24 18:03:28.540984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.486 [2024-07-24 18:03:28.540998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.487 [2024-07-24 18:03:28.541704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.487 [2024-07-24 18:03:28.541718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.487 [2024-07-24 18:03:28.541738 - 18:03:28.542916] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:30-63 nsid:1 lba:28416-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.488 [2024-07-24 18:03:28.542931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517d40 is same with the state(6) to be set
00:20:42.488 [2024-07-24 18:03:28.543013] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1517d40 was disconnected and freed. reset controller.
00:20:42.488 [2024-07-24 18:03:28.544603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:42.488 [2024-07-24 18:03:28.544657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:42.488 [2024-07-24 18:03:28.544684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cb910 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.544709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0b50 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.547823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:42.488 [2024-07-24 18:03:28.547872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14717a0 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.547931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1563570 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.547967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554cd0 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.548002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e610 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.548034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf400 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.548068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6c80 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.548130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cc280 (9): Bad file descriptor
00:20:42.488 [2024-07-24 18:03:28.549220] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1398050 was disconnected and freed. reset controller.
00:20:42.488 [2024-07-24 18:03:28.549474] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:42.488 [2024-07-24 18:03:28.549681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.488 [2024-07-24 18:03:28.549712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c0b50 with addr=10.0.0.2, port=4420
00:20:42.488 [2024-07-24 18:03:28.549729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0b50 is same with the state(6) to be set
00:20:42.488 [2024-07-24 18:03:28.549858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.488 [2024-07-24 18:03:28.549885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cb910 with addr=10.0.0.2, port=4420
00:20:42.488 [2024-07-24 18:03:28.549901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cb910 is same with the state(6) to be set
00:20:42.488 [2024-07-24 18:03:28.549987 - 18:03:28.552210] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.490 [2024-07-24 18:03:28.552233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1521920 is same with the state(6) to be set
00:20:42.490 [2024-07-24 18:03:28.553586] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:42.490 [2024-07-24 18:03:28.553638 - 18:03:28.555840] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.492 [2024-07-24 18:03:28.555942] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1443ba0 was disconnected and freed. reset controller.
00:20:42.492 [2024-07-24 18:03:28.556670] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:42.492 [2024-07-24 18:03:28.556753] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:42.492 [2024-07-24 18:03:28.556806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:42.492 [2024-07-24 18:03:28.556859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:42.492 [2024-07-24 18:03:28.557056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.492 [2024-07-24 18:03:28.557083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14717a0 with addr=10.0.0.2, port=4420
00:20:42.492 [2024-07-24 18:03:28.557099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14717a0 is same with the state(6) to be set
00:20:42.492 [2024-07-24 18:03:28.557146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0b50 (9): Bad file descriptor
00:20:42.492 [2024-07-24 18:03:28.557177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cb910 (9): Bad file descriptor
00:20:42.492 [2024-07-24 18:03:28.558518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:42.492 [2024-07-24 18:03:28.558681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.492 [2024-07-24 18:03:28.558708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139c830 with addr=10.0.0.2, port=4420
00:20:42.492 [2024-07-24 18:03:28.558725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139c830 is same with the state(6) to be set
00:20:42.492 [2024-07-24 18:03:28.558846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.492 [2024-07-24 18:03:28.558871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e610 with addr=10.0.0.2, port=4420
00:20:42.492 [2024-07-24 18:03:28.558887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e610 is same with the state(6) to be set
00:20:42.492 [2024-07-24 18:03:28.558906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14717a0 (9): Bad file descriptor
00:20:42.492 [2024-07-24 18:03:28.558923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:42.492 [2024-07-24 18:03:28.558936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:42.492 [2024-07-24 18:03:28.558951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:42.492 [2024-07-24 18:03:28.558971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:42.492 [2024-07-24 18:03:28.558984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:42.492 [2024-07-24 18:03:28.558996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:42.492 [2024-07-24 18:03:28.559066] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:42.492 [2024-07-24 18:03:28.559093] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:42.492 [2024-07-24 18:03:28.559469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:42.492 [2024-07-24 18:03:28.559492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:42.492 [2024-07-24 18:03:28.559641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.492 [2024-07-24 18:03:28.559675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bf400 with addr=10.0.0.2, port=4420
00:20:42.492 [2024-07-24 18:03:28.559694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bf400 is same with the state(6) to be set
00:20:42.492 [2024-07-24 18:03:28.559713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139c830 (9): Bad file descriptor
00:20:42.492 [2024-07-24 18:03:28.559732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e610 (9): Bad file descriptor
00:20:42.492 [2024-07-24 18:03:28.559747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:42.492 [2024-07-24 18:03:28.559760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:42.492 [2024-07-24 18:03:28.559772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:42.492 [2024-07-24 18:03:28.559842 - 18:03:28.562039] nvme_qpair.c: [repeated READ/completion pairs elided: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:42.494 [2024-07-24 18:03:28.562054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524160 is same with the state(6) to be set
00:20:42.494 [2024-07-24 18:03:28.563664 - 18:03:28.564217] nvme_qpair.c: [repeated READ/completion pairs elided: READ sqid:1 cid:0-15 nsid:1 lba:16384-18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:43.063 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:20:43.063 18:03:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:20:43.063 [2024-07-24 18:03:29.043871 - 18:03:29.046236] nvme_qpair.c: [repeated READ/completion pairs elided: READ sqid:1 cid:16-63 nsid:1 lba:18432-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:43.064 [2024-07-24 18:03:29.046260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445a60 is same with the state(6) to be set
00:20:43.064 [2024-07-24 18:03:29.047830 - 18:03:29.050732] nvme_qpair.c: [repeated READ/completion pairs elided: READ sqid:1 cid:4-62 nsid:1 lba:16896-24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:43.066 [2024-07-24 18:03:29.050769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.066 [2024-07-24 18:03:29.050790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:43.066 [2024-07-24 18:03:29.050812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.066 [2024-07-24 18:03:29.050832] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.050854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.050874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.050898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.050936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.050961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.050981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.051002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446e00 is same with the state(6) to be set 00:20:43.066 [2024-07-24 18:03:29.052864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.052907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.052936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.052968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.052993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.066 [2024-07-24 18:03:29.053691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.066 [2024-07-24 18:03:29.053713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.053735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.053755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.053778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.053799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.053822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.053853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.053877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.053912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.053936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.053957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.053980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.054963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.054983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:43.067 [2024-07-24 18:03:29.055213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.067 [2024-07-24 18:03:29.055640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.067 [2024-07-24 18:03:29.055660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.055683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.055704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.055726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 
18:03:29.055748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.055772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.055797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.055820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.055840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.055864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.055884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.055905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.055940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.055964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.055986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.056010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.056030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.056053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.068 [2024-07-24 18:03:29.056074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.068 [2024-07-24 18:03:29.056098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143bfe0 is same with the state(6) to be set 00:20:43.068 [2024-07-24 18:03:29.057964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
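[Editor's note, not part of the captured log: the flood of paired *NOTICE* lines above is SPDK printing each still-outstanding READ/WRITE command and its completion as the target tears down the submission queues. The "(00/08)" in each completion is the (status code type / status code) pair: SCT 0x0 is Generic Command Status and SC 0x08 is "Command Aborted due to SQ Deletion" in the NVMe specification. A minimal sketch of decoding that status in an I/O completion callback using SPDK's public completion type; the callback name and message format here are illustrative, not part of this test:]

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical I/O completion callback; signature matches spdk_nvme_cmd_cb. */
    static void io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0x0, sc=0x8 is the "ABORTED - SQ DELETION" case in the log:
             * the queue pair was deleted while this command was in flight. */
            fprintf(stderr, "I/O aborted: sct=0x%x sc=0x%x\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }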
00:20:43.068 [2024-07-24 18:03:29.058002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:43.068 [2024-07-24 18:03:29.058032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:43.068 [2024-07-24 18:03:29.058071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:43.068 task offset: 24576 on job bdev=Nvme2n1 fails
00:20:43.068
00:20:43.068                                                                                        Latency(us)
00:20:43.068 Device Information                                            : runtime(s)    IOPS   MiB/s   Fail/s    TO/s     Average         min         max
00:20:43.068 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme1n1 ended in about 1.00 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme1n1                                                       :       1.00  128.23    8.01    64.11    0.00   329426.87    19029.71   290494.39
00:20:43.068 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme2n1 ended in about 0.99 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme2n1                                                       :       0.99  194.88   12.18    64.96    0.00   239122.20    23690.05   276513.37
00:20:43.068 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme3n1 ended in about 1.01 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme3n1                                                       :       1.01  126.98    7.94    63.49    0.00   320395.19    39224.51   262532.36
00:20:43.068 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme4n1 ended in about 1.00 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme4n1                                                       :       1.00  191.38   11.96    63.79    0.00   234473.53     8835.22   285834.05
00:20:43.068 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme5n1 ended in about 0.99 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme5n1                                                       :       0.99  193.77   12.11    64.59    0.00   226796.56    11165.39   276513.37
00:20:43.068 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme6n1 ended in about 0.99 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme6n1                                                       :       0.99  194.13   12.13    64.71    0.00   221757.72     8689.59   259425.47
00:20:43.068 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme7n1                                                       :       0.99  193.47   12.09     0.00    0.00   290857.21    17185.00   281173.71
00:20:43.068 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme8n1 ended in about 1.49 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme8n1                                                       :       1.49   85.84    5.37    42.92    0.00   451265.17    20971.52   723905.80
00:20:43.068 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme9n1 ended in about 1.50 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme9n1                                                       :       1.50   88.24    5.52    42.78    0.00   437713.36    18544.26   633805.94
00:20:43.068 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.068 Job: Nvme10n1 ended in about 1.50 seconds with error
00:20:43.068 Verification LBA range: start 0x0 length 0x400
00:20:43.068 Nvme10n1                                                      :       1.50   85.28    5.33    42.64    0.00   442665.84    24855.13   705264.45
===================================================================================================================
00:20:43.068 Total                                                         :            1482.20   92.64   514.00    0.00   309223.58     8689.59   723905.80
00:20:43.068 [2024-07-24 18:03:29.085006] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:43.068 [2024-07-24 18:03:29.085195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bf400 (9): Bad file descriptor
00:20:43.068 [2024-07-24 18:03:29.085242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:43.068 [2024-07-24 18:03:29.085259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:43.068 [2024-07-24 18:03:29.085279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:43.068 [2024-07-24 18:03:29.085311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:43.068 [2024-07-24 18:03:29.085330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:43.068 [2024-07-24 18:03:29.085360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:43.068 [2024-07-24 18:03:29.085448] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:43.068 [2024-07-24 18:03:29.085477] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:43.068 [2024-07-24 18:03:29.085505] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:43.068 [2024-07-24 18:03:29.085548] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:43.068 [2024-07-24 18:03:29.085729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:43.068 [2024-07-24 18:03:29.085776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:43.068 [2024-07-24 18:03:29.085806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
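[Editor's note: a quick consistency check on the Latency table above — the jobs use 64 KiB I/Os ("IO size: 65536"), so the MiB/s column should equal IOPS / 16. A tiny standalone check, illustrative only:]

    #include <stdio.h>

    int main(void)
    {
        /* Nvme1n1 row from the table: 128.23 IOPS at 65536-byte I/Os. */
        double iops = 128.23, io_size = 65536.0;
        printf("%.2f MiB/s\n", iops * io_size / (1024.0 * 1024.0)); /* prints 8.01 */
        return 0;
    }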
00:20:43.068 [2024-07-24 18:03:29.086219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.068 [2024-07-24 18:03:29.086262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c6c80 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.086290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c6c80 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.086550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.086585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1563570 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.086608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1563570 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.086786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.086834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc280 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.086868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc280 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.086891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.086909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.086944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:43.069 [2024-07-24 18:03:29.086991] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:43.069 [2024-07-24 18:03:29.087055] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:43.069 [2024-07-24 18:03:29.087098] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:43.069 [2024-07-24 18:03:29.087159] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:43.069 [2024-07-24 18:03:29.088368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:43.069 [2024-07-24 18:03:29.088401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:43.069 [2024-07-24 18:03:29.088437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:43.069 [2024-07-24 18:03:29.088492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:43.069 [2024-07-24 18:03:29.088743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.088779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554cd0 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.088805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554cd0 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.088835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c6c80 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.088866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1563570 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.088894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cc280 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.089015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:43.069 [2024-07-24 18:03:29.089058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:43.069 [2024-07-24 18:03:29.089400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.089450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14717a0 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.089481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14717a0 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.089712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.089746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cb910 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.089769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cb910 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.089994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.090027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c0b50 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.090050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0b50 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.090100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554cd0 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.090140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.090161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.090180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:20:43.069 [2024-07-24 18:03:29.090221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.090239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.090257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:43.069 [2024-07-24 18:03:29.090278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.090297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.090314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:43.069 [2024-07-24 18:03:29.090754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:43.069 [2024-07-24 18:03:29.090780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:43.069 [2024-07-24 18:03:29.090800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:43.069 [2024-07-24 18:03:29.091040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.091078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9e610 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.091144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9e610 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.091499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:43.069 [2024-07-24 18:03:29.091537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x139c830 with addr=10.0.0.2, port=4420 00:20:43.069 [2024-07-24 18:03:29.091564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139c830 is same with the state(6) to be set 00:20:43.069 [2024-07-24 18:03:29.091596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14717a0 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.091630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cb910 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.091678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0b50 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.091706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.091748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.091771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:43.069 [2024-07-24 18:03:29.091851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:43.069 [2024-07-24 18:03:29.091883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9e610 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.091912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139c830 (9): Bad file descriptor 00:20:43.069 [2024-07-24 18:03:29.091938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.091959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.091980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:43.069 [2024-07-24 18:03:29.092005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.092026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.092047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:43.069 [2024-07-24 18:03:29.092072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.092094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.092140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:43.069 [2024-07-24 18:03:29.092189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:43.069 [2024-07-24 18:03:29.092214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:43.069 [2024-07-24 18:03:29.092234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:43.069 [2024-07-24 18:03:29.092253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.092272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.092292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:43.069 [2024-07-24 18:03:29.092318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:43.069 [2024-07-24 18:03:29.092338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:43.069 [2024-07-24 18:03:29.092354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:43.069 [2024-07-24 18:03:29.092399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:43.069 [2024-07-24 18:03:29.092432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
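[Editor's note: the repeating "controller reinitialization failed" / "in failed state." / "Resetting controller failed." sequences above come from bdev_nvme driving SPDK's asynchronous reset path after the target has already shut down: every connect() to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED on Linux), so each reset attempt ends in the failed state. A rough sketch of the public-API shape of that disconnect/reconnect loop, with the polling cadence and error handling simplified — an assumption-laden illustration, not the bdev_nvme implementation:]

    #include <errno.h>
    #include "spdk/nvme.h"

    static int reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr); /* tear down all qpairs */
        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        do {
            /* In the log this polling keeps failing because the TCP
             * connection to the (already stopped) target is refused. */
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN); /* -EAGAIN: reconnect still in progress */
        return rc; /* 0 on success, negated errno on failure */
    }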
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2831483
00:20:44.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2831483) - No such process
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:44.006 rmmod nvme_tcp
00:20:44.006 rmmod nvme_fabrics
00:20:44.006 rmmod nvme_keyring
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:44.006 18:03:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:45.911 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:45.912
00:20:45.912 real	0m8.074s
00:20:45.912 user	0m20.752s
00:20:45.912 sys	0m1.556s
00:20:45.912 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:45.912 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.912 ************************************
00:20:45.912 END TEST nvmf_shutdown_tc3
00:20:45.912 ************************************
00:20:45.912 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:20:45.912
00:20:45.912 real	0m28.244s
00:20:45.912 user	1m18.846s
00:20:45.912 sys	0m6.581s
00:20:45.912 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:45.912 18:03:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:45.912 ************************************
00:20:45.912 END TEST nvmf_shutdown
00:20:45.912 ************************************
00:20:46.170 18:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # trap - SIGINT SIGTERM EXIT
00:20:46.170
00:20:46.170 real	10m42.154s
00:20:46.170 user	25m22.408s
00:20:46.170 sys	2m40.405s
00:20:46.170 18:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:46.170 18:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:46.170 ************************************
00:20:46.170 END TEST nvmf_target_extra
00:20:46.170 ************************************
00:20:46.170 18:03:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:20:46.170 18:03:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:20:46.170 18:03:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:46.170 18:03:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:46.170 ************************************
00:20:46.170 START TEST nvmf_host
00:20:46.170 ************************************
00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:20:46.170 * Looking for test storage...
00:20:46.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.170 18:03:32 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.171 ************************************ 00:20:46.171 START TEST nvmf_multicontroller 00:20:46.171 ************************************ 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:46.171 * Looking for test storage... 
00:20:46.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.171 18:03:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.171 18:03:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.072 18:03:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:48.072 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:48.073 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:48.073 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:48.073 Found net devices under 0000:09:00.0: cvl_0_0 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:48.073 Found net devices under 0000:09:00.1: cvl_0_1 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.073 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:48.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:20:48.331 00:20:48.331 --- 10.0.0.2 ping statistics --- 00:20:48.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.331 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:20:48.331 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:20:48.331 00:20:48.332 --- 10.0.0.1 ping statistics --- 00:20:48.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.332 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2834044 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2834044 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2834044 ']' 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.332 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.332 [2024-07-24 18:03:34.539717] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
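The ping exchange above is the last step of nvmftestinit: for NET_TYPE=phy, nvmf/common.sh moves one port of the E810 pair (cvl_0_0, 0000:09:00.0) into a private network namespace to act as the target and leaves its sibling port (cvl_0_1, 0000:09:00.1) in the root namespace as the initiator. A minimal sketch of that plumbing, using the interface names and addresses seen in this run (assumes the two ports are physically looped, as on this CI node):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open TCP/4420 in the root-namespace firewall
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The target whose startup banner follows runs inside that namespace; nvmfappstart boils down to roughly this (a sketch — the $! capture stands in for the script's own pid bookkeeping):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!   # waitforlisten then polls /var/tmp/spdk.sock until RPCs are accepted
  # -m 0xE pins reactors to cores 1-3, matching the "Reactor started on core 1/2/3" lines below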
00:20:48.332 [2024-07-24 18:03:34.539789] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.332 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.590 [2024-07-24 18:03:34.604627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:48.590 [2024-07-24 18:03:34.713471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.590 [2024-07-24 18:03:34.713527] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.590 [2024-07-24 18:03:34.713555] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.590 [2024-07-24 18:03:34.713567] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.590 [2024-07-24 18:03:34.713577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.590 [2024-07-24 18:03:34.713728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.590 [2024-07-24 18:03:34.713792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.590 [2024-07-24 18:03:34.713795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.590 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.590 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:48.590 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.590 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.590 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 [2024-07-24 18:03:34.867232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 Malloc0 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 
18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 [2024-07-24 18:03:34.927435] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 [2024-07-24 18:03:34.935286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 Malloc1 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2834066 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2834066 /var/tmp/bdevperf.sock 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2834066 ']' 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
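Stripped of the xtrace noise, the provisioning that just scrolled by is a short RPC sequence: one TCP transport, two 64 MiB malloc bdevs, and two subsystems that each listen on 10.0.0.2 ports 4420 and 4421. A sketch of the equivalent manual steps (rpc_cmd in the trace corresponds to scripts/rpc.py calls against the default /var/tmp/spdk.sock; repo paths as used by this job):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ...then the same with Malloc1 / cnode2 / SPDK00000000000002 for the second subsystem
  # initiator-side app under test; -z keeps it idle until perform_tests arrives on its own socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &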
00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.849 18:03:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.107 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.107 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:49.107 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:49.107 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.107 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.365 NVMe0n1 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.365 1 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.365 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.365 request: 00:20:49.365 { 00:20:49.365 "name": "NVMe0", 00:20:49.365 "trtype": "tcp", 00:20:49.365 "traddr": "10.0.0.2", 00:20:49.365 "adrfam": "ipv4", 00:20:49.365 
"trsvcid": "4420", 00:20:49.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.365 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:49.365 "hostaddr": "10.0.0.2", 00:20:49.365 "hostsvcid": "60000", 00:20:49.365 "prchk_reftag": false, 00:20:49.365 "prchk_guard": false, 00:20:49.365 "hdgst": false, 00:20:49.365 "ddgst": false, 00:20:49.366 "method": "bdev_nvme_attach_controller", 00:20:49.366 "req_id": 1 00:20:49.366 } 00:20:49.366 Got JSON-RPC error response 00:20:49.366 response: 00:20:49.366 { 00:20:49.366 "code": -114, 00:20:49.366 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:49.366 } 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.366 request: 00:20:49.366 { 00:20:49.366 "name": "NVMe0", 00:20:49.366 "trtype": "tcp", 00:20:49.366 "traddr": "10.0.0.2", 00:20:49.366 "adrfam": "ipv4", 00:20:49.366 "trsvcid": "4420", 00:20:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.366 "hostaddr": "10.0.0.2", 00:20:49.366 "hostsvcid": "60000", 00:20:49.366 "prchk_reftag": false, 00:20:49.366 "prchk_guard": false, 00:20:49.366 "hdgst": false, 00:20:49.366 "ddgst": false, 00:20:49.366 "method": "bdev_nvme_attach_controller", 00:20:49.366 "req_id": 1 00:20:49.366 } 00:20:49.366 Got JSON-RPC error response 00:20:49.366 response: 00:20:49.366 { 00:20:49.366 "code": -114, 00:20:49.366 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:20:49.366 } 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.366 request: 00:20:49.366 { 00:20:49.366 "name": "NVMe0", 00:20:49.366 "trtype": "tcp", 00:20:49.366 "traddr": "10.0.0.2", 00:20:49.366 "adrfam": "ipv4", 00:20:49.366 "trsvcid": "4420", 00:20:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.366 "hostaddr": "10.0.0.2", 00:20:49.366 "hostsvcid": "60000", 00:20:49.366 "prchk_reftag": false, 00:20:49.366 "prchk_guard": false, 00:20:49.366 "hdgst": false, 00:20:49.366 "ddgst": false, 00:20:49.366 "multipath": "disable", 00:20:49.366 "method": "bdev_nvme_attach_controller", 00:20:49.366 "req_id": 1 00:20:49.366 } 00:20:49.366 Got JSON-RPC error response 00:20:49.366 response: 00:20:49.366 { 00:20:49.366 "code": -114, 00:20:49.366 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:49.366 } 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.366 request: 00:20:49.366 { 00:20:49.366 "name": "NVMe0", 00:20:49.366 "trtype": "tcp", 00:20:49.366 "traddr": "10.0.0.2", 00:20:49.366 "adrfam": "ipv4", 00:20:49.366 "trsvcid": "4420", 00:20:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.366 "hostaddr": "10.0.0.2", 00:20:49.366 "hostsvcid": "60000", 00:20:49.366 "prchk_reftag": false, 00:20:49.366 "prchk_guard": false, 00:20:49.366 "hdgst": false, 00:20:49.366 "ddgst": false, 00:20:49.366 "multipath": "failover", 00:20:49.366 "method": "bdev_nvme_attach_controller", 00:20:49.366 "req_id": 1 00:20:49.366 } 00:20:49.366 Got JSON-RPC error response 00:20:49.366 response: 00:20:49.366 { 00:20:49.366 "code": -114, 00:20:49.366 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:49.366 } 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.366 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.624 00:20:49.624 18:03:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.624 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:49.624 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.624 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.624 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.624 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:49.624 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.624 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.882 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:49.882 18:03:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:50.816 0 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2834066 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2834066 ']' 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2834066 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.816 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834066 00:20:51.074 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
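The run of -114 errors above is the heart of this test: once NVMe0 is attached to cnode1 at 10.0.0.2:4420, bdev_nvme_attach_controller must reject a second attach that reuses the controller name with a mismatched hostnqn, a different subsystem NQN, or a multipath mode (disable or failover) over the very same path, rather than silently creating a duplicate. One case can be reproduced by hand against the bdevperf RPC socket (a sketch; the arguments are the ones visible in the trace):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
  # expected: code -114, "A controller named NVMe0 already exists with the specified network path"

After the negative checks the script adds and removes a second path on port 4421, attaches NVMe1, verifies bdev_nvme_get_controllers reports two controllers, and kicks off the one-second 128-deep 4 KiB write workload whose results land in try.txt below:

  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests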
00:20:51.074 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:51.074 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834066' 00:20:51.074 killing process with pid 2834066 00:20:51.074 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2834066 00:20:51.074 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2834066 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:20:51.332 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:51.332 [2024-07-24 18:03:35.039969] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:20:51.332 [2024-07-24 18:03:35.040052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834066 ] 00:20:51.332 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.332 [2024-07-24 18:03:35.100283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.332 [2024-07-24 18:03:35.210991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.332 [2024-07-24 18:03:35.919383] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name f5d7399f-1aa9-43ce-8851-d3949e043c99 already exists 00:20:51.332 [2024-07-24 18:03:35.919438] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:f5d7399f-1aa9-43ce-8851-d3949e043c99 alias for bdev NVMe1n1 00:20:51.332 [2024-07-24 18:03:35.919453] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:51.332 Running I/O for 1 seconds... 00:20:51.332 00:20:51.332 Latency(us) 00:20:51.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.332 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:51.332 NVMe0n1 : 1.00 19390.54 75.74 0.00 0.00 6590.51 4199.16 12815.93 00:20:51.332 =================================================================================================================== 00:20:51.332 Total : 19390.54 75.74 0.00 0.00 6590.51 4199.16 12815.93 00:20:51.332 Received shutdown signal, test time was about 1.000000 seconds 00:20:51.332 00:20:51.332 Latency(us) 00:20:51.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.332 =================================================================================================================== 00:20:51.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.332 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.332 rmmod nvme_tcp 00:20:51.332 rmmod nvme_fabrics 00:20:51.332 rmmod nvme_keyring 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2834044 ']' 00:20:51.332 18:03:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2834044 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2834044 ']' 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2834044 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2834044 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2834044' 00:20:51.332 killing process with pid 2834044 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2834044 00:20:51.332 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2834044 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.590 18:03:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.123 00:20:54.123 real 0m7.518s 00:20:54.123 user 0m12.167s 00:20:54.123 sys 0m2.243s 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.123 ************************************ 00:20:54.123 END TEST nvmf_multicontroller 00:20:54.123 ************************************ 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.123 ************************************ 00:20:54.123 START TEST nvmf_aer 00:20:54.123 ************************************ 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:54.123 * Looking for test storage... 00:20:54.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.123 18:03:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:56.027 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:56.027 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.027 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:56.028 Found net devices under 0000:09:00.0: cvl_0_0 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.028 18:03:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:56.028 Found net devices under 0000:09:00.1: cvl_0_1 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.028 18:03:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:56.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:20:56.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:20:56.028 00:20:56.028 --- 10.0.0.2 ping statistics --- 00:20:56.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.028 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:20:56.028 00:20:56.028 --- 10.0.0.1 ping statistics --- 00:20:56.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.028 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2836278 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2836278 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2836278 ']' 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.028 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.028 [2024-07-24 18:03:42.134505] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
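The nvmftestinit plumbing traced above wires the two ice ports into a point-to-point TCP test rig: one port (cvl_0_0) is moved into a private network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the path before the target application is started. Condensed into a runnable sketch (root required; interface names, addresses, and the iptables rule are exactly the ones the log reports):

  # Sketch of the namespace setup nvmf_tcp_init performs above.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator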
00:20:56.028 [2024-07-24 18:03:42.134579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.028 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.028 [2024-07-24 18:03:42.199637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.287 [2024-07-24 18:03:42.310421] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.287 [2024-07-24 18:03:42.310477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.287 [2024-07-24 18:03:42.310490] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.287 [2024-07-24 18:03:42.310501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.287 [2024-07-24 18:03:42.310511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.287 [2024-07-24 18:03:42.310598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.287 [2024-07-24 18:03:42.310664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.287 [2024-07-24 18:03:42.310730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.287 [2024-07-24 18:03:42.310733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 [2024-07-24 18:03:42.470635] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 Malloc0 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 18:03:42 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 [2024-07-24 18:03:42.523081] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.287 [ 00:20:56.287 { 00:20:56.287 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:56.287 "subtype": "Discovery", 00:20:56.287 "listen_addresses": [], 00:20:56.287 "allow_any_host": true, 00:20:56.287 "hosts": [] 00:20:56.287 }, 00:20:56.287 { 00:20:56.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.287 "subtype": "NVMe", 00:20:56.287 "listen_addresses": [ 00:20:56.287 { 00:20:56.287 "trtype": "TCP", 00:20:56.287 "adrfam": "IPv4", 00:20:56.287 "traddr": "10.0.0.2", 00:20:56.287 "trsvcid": "4420" 00:20:56.287 } 00:20:56.287 ], 00:20:56.287 "allow_any_host": true, 00:20:56.287 "hosts": [], 00:20:56.287 "serial_number": "SPDK00000000000001", 00:20:56.287 "model_number": "SPDK bdev Controller", 00:20:56.287 "max_namespaces": 2, 00:20:56.287 "min_cntlid": 1, 00:20:56.287 "max_cntlid": 65519, 00:20:56.287 "namespaces": [ 00:20:56.287 { 00:20:56.287 "nsid": 1, 00:20:56.287 "bdev_name": "Malloc0", 00:20:56.287 "name": "Malloc0", 00:20:56.287 "nguid": "C5B523982D35490DB5751795F9E8460A", 00:20:56.287 "uuid": "c5b52398-2d35-490d-b575-1795f9e8460a" 00:20:56.287 } 00:20:56.287 ] 00:20:56.287 } 00:20:56.287 ] 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2836431 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:20:56.287 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:20:56.546 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.546 Malloc1 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.546 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.804 [ 00:20:56.804 { 00:20:56.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:56.804 "subtype": "Discovery", 00:20:56.804 "listen_addresses": [], 00:20:56.804 "allow_any_host": true, 00:20:56.804 "hosts": [] 00:20:56.804 }, 00:20:56.804 { 00:20:56.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.804 "subtype": "NVMe", 00:20:56.804 "listen_addresses": [ 00:20:56.804 { 00:20:56.804 "trtype": "TCP", 00:20:56.804 "adrfam": "IPv4", 00:20:56.804 "traddr": "10.0.0.2", 00:20:56.804 "trsvcid": "4420" 00:20:56.804 } 00:20:56.804 ], 00:20:56.804 "allow_any_host": true, 00:20:56.804 "hosts": [], 00:20:56.804 "serial_number": "SPDK00000000000001", 00:20:56.804 "model_number": "SPDK bdev Controller", 00:20:56.804 "max_namespaces": 2, 00:20:56.804 "min_cntlid": 1, 00:20:56.804 "max_cntlid": 65519, 00:20:56.804 "namespaces": [ 00:20:56.804 { 00:20:56.804 "nsid": 1, 00:20:56.804 "bdev_name": "Malloc0", 00:20:56.804 "name": "Malloc0", 00:20:56.804 "nguid": "C5B523982D35490DB5751795F9E8460A", 00:20:56.804 "uuid": "c5b52398-2d35-490d-b575-1795f9e8460a" 00:20:56.804 }, 00:20:56.804 { 00:20:56.804 "nsid": 2, 00:20:56.804 "bdev_name": "Malloc1", 00:20:56.804 "name": "Malloc1", 00:20:56.804 "nguid": 
"DD72D8E3FFB64BC3843ACC728EF76835", 00:20:56.804 "uuid": "dd72d8e3-ffb6-4bc3-843a-cc728ef76835" 00:20:56.804 } 00:20:56.804 ] 00:20:56.804 } 00:20:56.804 ] 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2836431 00:20:56.804 Asynchronous Event Request test 00:20:56.804 Attaching to 10.0.0.2 00:20:56.804 Attached to 10.0.0.2 00:20:56.804 Registering asynchronous event callbacks... 00:20:56.804 Starting namespace attribute notice tests for all controllers... 00:20:56.804 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:56.804 aer_cb - Changed Namespace 00:20:56.804 Cleaning up... 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.804 rmmod nvme_tcp 00:20:56.804 rmmod nvme_fabrics 00:20:56.804 rmmod nvme_keyring 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2836278 ']' 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2836278 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2836278 ']' 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2836278 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@953 -- # uname 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2836278 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2836278' 00:20:56.804 killing process with pid 2836278 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2836278 00:20:56.804 18:03:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2836278 00:20:57.062 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.062 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.062 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.063 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.063 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.063 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.063 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.063 18:03:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.594 18:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.594 00:20:59.594 real 0m5.383s 00:20:59.594 user 0m4.161s 00:20:59.594 sys 0m1.908s 00:20:59.594 18:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.594 18:03:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.594 ************************************ 00:20:59.594 END TEST nvmf_aer 00:20:59.594 ************************************ 00:20:59.594 18:03:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:59.594 18:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:59.594 18:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.594 18:03:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.594 ************************************ 00:20:59.594 START TEST nvmf_async_init 00:20:59.594 ************************************ 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:59.595 * Looking for test storage... 
00:20:59.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:59.595 18:03:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3f911aac2db24d5da01d119edfb90ef5 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.595 18:03:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:01.496 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:01.496 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.496 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
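As in the aer run earlier, the "Found net devices under 0000:09:00.x" lines that follow come from a plain sysfs lookup: for each PCI function that matched the supported-device table, the script globs the net/ directory the bound driver exposes. A standalone sketch of that lookup (device addresses taken from this log; a function with no bound netdev simply yields no match):

  # Sketch of the sysfs mapping behind gather_supported_nvmf_pci_devs.
  for pci in 0000:09:00.0 0000:09:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] || continue        # glob left unexpanded -> no netdev bound
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done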
00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:01.497 Found net devices under 0000:09:00.0: cvl_0_0 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:01.497 Found net devices under 0000:09:00.1: cvl_0_1 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:21:01.497 00:21:01.497 --- 10.0.0.2 ping statistics --- 00:21:01.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.497 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:01.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:21:01.497 00:21:01.497 --- 10.0.0.1 ping statistics --- 00:21:01.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.497 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2838361 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2838361 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2838361 ']' 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.497 18:03:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:01.497 [2024-07-24 18:03:47.487770] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
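[The trace above is the harness's TCP plumbing: nvmf_tcp_init moves one port of the E810 NIC into a private network namespace so target and initiator can talk over real hardware on a single box, then nvmfappstart launches nvmf_tgt inside that namespace. A condensed sketch of just the namespace commands, with the interface names (cvl_0_0/cvl_0_1) and addresses taken from this run:

    # Sketch of nvmf_tcp_init as traced above; assumes the two ice ports have
    # already been exposed as cvl_0_0 / cvl_0_1 (see "Found net devices" earlier).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back

The two one-packet pings are the gate: only after both succeed does the harness prepend "ip netns exec cvl_0_0_ns_spdk" to NVMF_APP and return 0.]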
00:21:01.497 [2024-07-24 18:03:47.487875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.497 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.497 [2024-07-24 18:03:47.556470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.497 [2024-07-24 18:03:47.670595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.497 [2024-07-24 18:03:47.670666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.497 [2024-07-24 18:03:47.670690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.497 [2024-07-24 18:03:47.670704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.497 [2024-07-24 18:03:47.670716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.497 [2024-07-24 18:03:47.670755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 [2024-07-24 18:03:48.443723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 null0 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:02.469 18:03:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3f911aac2db24d5da01d119edfb90ef5 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 [2024-07-24 18:03:48.483959] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.469 nvme0n1 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.469 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.470 [ 00:21:02.470 { 00:21:02.470 "name": "nvme0n1", 00:21:02.470 "aliases": [ 00:21:02.470 "3f911aac-2db2-4d5d-a01d-119edfb90ef5" 00:21:02.470 ], 00:21:02.470 "product_name": "NVMe disk", 00:21:02.470 "block_size": 512, 00:21:02.470 "num_blocks": 2097152, 00:21:02.470 "uuid": "3f911aac-2db2-4d5d-a01d-119edfb90ef5", 00:21:02.470 "assigned_rate_limits": { 00:21:02.470 "rw_ios_per_sec": 0, 00:21:02.470 "rw_mbytes_per_sec": 0, 00:21:02.470 "r_mbytes_per_sec": 0, 00:21:02.470 "w_mbytes_per_sec": 0 00:21:02.470 }, 00:21:02.470 "claimed": false, 00:21:02.470 "zoned": false, 00:21:02.470 "supported_io_types": { 00:21:02.470 "read": true, 00:21:02.470 "write": true, 00:21:02.470 "unmap": false, 00:21:02.470 "flush": true, 00:21:02.470 "reset": true, 00:21:02.470 "nvme_admin": true, 00:21:02.470 "nvme_io": true, 00:21:02.470 "nvme_io_md": false, 00:21:02.470 "write_zeroes": true, 00:21:02.470 "zcopy": false, 00:21:02.470 "get_zone_info": false, 00:21:02.470 "zone_management": false, 00:21:02.470 "zone_append": false, 00:21:02.470 "compare": true, 00:21:02.470 "compare_and_write": true, 00:21:02.470 "abort": true, 00:21:02.470 "seek_hole": false, 00:21:02.470 "seek_data": false, 00:21:02.470 "copy": true, 00:21:02.470 "nvme_iov_md": 
false 00:21:02.470 }, 00:21:02.470 "memory_domains": [ 00:21:02.470 { 00:21:02.470 "dma_device_id": "system", 00:21:02.470 "dma_device_type": 1 00:21:02.470 } 00:21:02.470 ], 00:21:02.470 "driver_specific": { 00:21:02.470 "nvme": [ 00:21:02.470 { 00:21:02.470 "trid": { 00:21:02.470 "trtype": "TCP", 00:21:02.470 "adrfam": "IPv4", 00:21:02.470 "traddr": "10.0.0.2", 00:21:02.470 "trsvcid": "4420", 00:21:02.470 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:02.470 }, 00:21:02.470 "ctrlr_data": { 00:21:02.470 "cntlid": 1, 00:21:02.470 "vendor_id": "0x8086", 00:21:02.470 "model_number": "SPDK bdev Controller", 00:21:02.470 "serial_number": "00000000000000000000", 00:21:02.470 "firmware_revision": "24.09", 00:21:02.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.470 "oacs": { 00:21:02.470 "security": 0, 00:21:02.470 "format": 0, 00:21:02.470 "firmware": 0, 00:21:02.470 "ns_manage": 0 00:21:02.470 }, 00:21:02.470 "multi_ctrlr": true, 00:21:02.470 "ana_reporting": false 00:21:02.470 }, 00:21:02.470 "vs": { 00:21:02.470 "nvme_version": "1.3" 00:21:02.470 }, 00:21:02.470 "ns_data": { 00:21:02.470 "id": 1, 00:21:02.470 "can_share": true 00:21:02.470 } 00:21:02.470 } 00:21:02.470 ], 00:21:02.470 "mp_policy": "active_passive" 00:21:02.470 } 00:21:02.470 } 00:21:02.470 ] 00:21:02.470 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.470 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:02.470 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.470 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.470 [2024-07-24 18:03:48.737071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:02.470 [2024-07-24 18:03:48.737180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16621d0 (9): Bad file descriptor 00:21:02.729 [2024-07-24 18:03:48.879258] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.729 [ 00:21:02.729 { 00:21:02.729 "name": "nvme0n1", 00:21:02.729 "aliases": [ 00:21:02.729 "3f911aac-2db2-4d5d-a01d-119edfb90ef5" 00:21:02.729 ], 00:21:02.729 "product_name": "NVMe disk", 00:21:02.729 "block_size": 512, 00:21:02.729 "num_blocks": 2097152, 00:21:02.729 "uuid": "3f911aac-2db2-4d5d-a01d-119edfb90ef5", 00:21:02.729 "assigned_rate_limits": { 00:21:02.729 "rw_ios_per_sec": 0, 00:21:02.729 "rw_mbytes_per_sec": 0, 00:21:02.729 "r_mbytes_per_sec": 0, 00:21:02.729 "w_mbytes_per_sec": 0 00:21:02.729 }, 00:21:02.729 "claimed": false, 00:21:02.729 "zoned": false, 00:21:02.729 "supported_io_types": { 00:21:02.729 "read": true, 00:21:02.729 "write": true, 00:21:02.729 "unmap": false, 00:21:02.729 "flush": true, 00:21:02.729 "reset": true, 00:21:02.729 "nvme_admin": true, 00:21:02.729 "nvme_io": true, 00:21:02.729 "nvme_io_md": false, 00:21:02.729 "write_zeroes": true, 00:21:02.729 "zcopy": false, 00:21:02.729 "get_zone_info": false, 00:21:02.729 "zone_management": false, 00:21:02.729 "zone_append": false, 00:21:02.729 "compare": true, 00:21:02.729 "compare_and_write": true, 00:21:02.729 "abort": true, 00:21:02.729 "seek_hole": false, 00:21:02.729 "seek_data": false, 00:21:02.729 "copy": true, 00:21:02.729 "nvme_iov_md": false 00:21:02.729 }, 00:21:02.729 "memory_domains": [ 00:21:02.729 { 00:21:02.729 "dma_device_id": "system", 00:21:02.729 "dma_device_type": 1 00:21:02.729 } 00:21:02.729 ], 00:21:02.729 "driver_specific": { 00:21:02.729 "nvme": [ 00:21:02.729 { 00:21:02.729 "trid": { 00:21:02.729 "trtype": "TCP", 00:21:02.729 "adrfam": "IPv4", 00:21:02.729 "traddr": "10.0.0.2", 00:21:02.729 "trsvcid": "4420", 00:21:02.729 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:02.729 }, 00:21:02.729 "ctrlr_data": { 00:21:02.729 "cntlid": 2, 00:21:02.729 "vendor_id": "0x8086", 00:21:02.729 "model_number": "SPDK bdev Controller", 00:21:02.729 "serial_number": "00000000000000000000", 00:21:02.729 "firmware_revision": "24.09", 00:21:02.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.729 "oacs": { 00:21:02.729 "security": 0, 00:21:02.729 "format": 0, 00:21:02.729 "firmware": 0, 00:21:02.729 "ns_manage": 0 00:21:02.729 }, 00:21:02.729 "multi_ctrlr": true, 00:21:02.729 "ana_reporting": false 00:21:02.729 }, 00:21:02.729 "vs": { 00:21:02.729 "nvme_version": "1.3" 00:21:02.729 }, 00:21:02.729 "ns_data": { 00:21:02.729 "id": 1, 00:21:02.729 "can_share": true 00:21:02.729 } 00:21:02.729 } 00:21:02.729 ], 00:21:02.729 "mp_policy": "active_passive" 00:21:02.729 } 00:21:02.729 } 00:21:02.729 ] 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.729 18:03:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.oXote5PzMK 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.oXote5PzMK 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.729 [2024-07-24 18:03:48.929889] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.729 [2024-07-24 18:03:48.930016] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oXote5PzMK 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.729 [2024-07-24 18:03:48.937910] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oXote5PzMK 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.729 18:03:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.729 [2024-07-24 18:03:48.945938] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.729 [2024-07-24 18:03:48.945997] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:02.987 nvme0n1 00:21:02.987 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.987 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:02.987 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
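[Before this third bdev_get_bdevs, the test switched to the TLS path: host access is restricted, a second listener on port 4421 is opened with --secure-channel, and both the subsystem and the initiator are given the same interchange-format PSK. Condensed sketch; the redirection of the echo into the key file is implied by the chmod and --psk usage that follow, not visible in the xtrace output:

    # TLS leg as traced above (host/async_init.sh@53-@65).
    KEY=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 \
        -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 \
        --psk "$KEY"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

Both sides warn that TLS support is experimental and that the file-based PSK options are deprecated for v24.09, which is why the killprocess summary below logs two deprecation hits.]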
00:21:02.987 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.987 [ 00:21:02.987 { 00:21:02.987 "name": "nvme0n1", 00:21:02.987 "aliases": [ 00:21:02.987 "3f911aac-2db2-4d5d-a01d-119edfb90ef5" 00:21:02.987 ], 00:21:02.987 "product_name": "NVMe disk", 00:21:02.987 "block_size": 512, 00:21:02.987 "num_blocks": 2097152, 00:21:02.987 "uuid": "3f911aac-2db2-4d5d-a01d-119edfb90ef5", 00:21:02.987 "assigned_rate_limits": { 00:21:02.987 "rw_ios_per_sec": 0, 00:21:02.987 "rw_mbytes_per_sec": 0, 00:21:02.987 "r_mbytes_per_sec": 0, 00:21:02.987 "w_mbytes_per_sec": 0 00:21:02.987 }, 00:21:02.987 "claimed": false, 00:21:02.987 "zoned": false, 00:21:02.987 "supported_io_types": { 00:21:02.987 "read": true, 00:21:02.987 "write": true, 00:21:02.987 "unmap": false, 00:21:02.987 "flush": true, 00:21:02.987 "reset": true, 00:21:02.987 "nvme_admin": true, 00:21:02.987 "nvme_io": true, 00:21:02.987 "nvme_io_md": false, 00:21:02.987 "write_zeroes": true, 00:21:02.987 "zcopy": false, 00:21:02.987 "get_zone_info": false, 00:21:02.987 "zone_management": false, 00:21:02.987 "zone_append": false, 00:21:02.987 "compare": true, 00:21:02.987 "compare_and_write": true, 00:21:02.987 "abort": true, 00:21:02.987 "seek_hole": false, 00:21:02.987 "seek_data": false, 00:21:02.987 "copy": true, 00:21:02.987 "nvme_iov_md": false 00:21:02.987 }, 00:21:02.987 "memory_domains": [ 00:21:02.987 { 00:21:02.987 "dma_device_id": "system", 00:21:02.987 "dma_device_type": 1 00:21:02.987 } 00:21:02.987 ], 00:21:02.987 "driver_specific": { 00:21:02.988 "nvme": [ 00:21:02.988 { 00:21:02.988 "trid": { 00:21:02.988 "trtype": "TCP", 00:21:02.988 "adrfam": "IPv4", 00:21:02.988 "traddr": "10.0.0.2", 00:21:02.988 "trsvcid": "4421", 00:21:02.988 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:02.988 }, 00:21:02.988 "ctrlr_data": { 00:21:02.988 "cntlid": 3, 00:21:02.988 "vendor_id": "0x8086", 00:21:02.988 "model_number": "SPDK bdev Controller", 00:21:02.988 "serial_number": "00000000000000000000", 00:21:02.988 "firmware_revision": "24.09", 00:21:02.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.988 "oacs": { 00:21:02.988 "security": 0, 00:21:02.988 "format": 0, 00:21:02.988 "firmware": 0, 00:21:02.988 "ns_manage": 0 00:21:02.988 }, 00:21:02.988 "multi_ctrlr": true, 00:21:02.988 "ana_reporting": false 00:21:02.988 }, 00:21:02.988 "vs": { 00:21:02.988 "nvme_version": "1.3" 00:21:02.988 }, 00:21:02.988 "ns_data": { 00:21:02.988 "id": 1, 00:21:02.988 "can_share": true 00:21:02.988 } 00:21:02.988 } 00:21:02.988 ], 00:21:02.988 "mp_policy": "active_passive" 00:21:02.988 } 00:21:02.988 } 00:21:02.988 ] 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.oXote5PzMK 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:02.988 18:03:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.988 rmmod nvme_tcp 00:21:02.988 rmmod nvme_fabrics 00:21:02.988 rmmod nvme_keyring 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2838361 ']' 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2838361 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2838361 ']' 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2838361 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2838361 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2838361' 00:21:02.988 killing process with pid 2838361 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2838361 00:21:02.988 [2024-07-24 18:03:49.152744] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:02.988 [2024-07-24 18:03:49.152784] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:02.988 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2838361 00:21:03.246 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:03.246 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:03.247 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:03.247 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.247 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.247 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.247 18:03:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.247 18:03:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.773 00:21:05.773 real 0m6.142s 00:21:05.773 user 0m2.936s 00:21:05.773 sys 0m1.800s 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:05.773 ************************************ 00:21:05.773 END TEST nvmf_async_init 00:21:05.773 ************************************ 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.773 ************************************ 00:21:05.773 START TEST dma 00:21:05.773 ************************************ 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:05.773 * Looking for test storage... 00:21:05.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.773 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.774 
18:03:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.774 18:03:51 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:05.774 00:21:05.774 real 0m0.072s 00:21:05.774 user 0m0.030s 00:21:05.774 sys 0m0.048s 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:05.774 ************************************ 00:21:05.774 END TEST dma 00:21:05.774 ************************************ 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.774 ************************************ 00:21:05.774 START TEST nvmf_identify 00:21:05.774 ************************************ 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:05.774 * Looking for test storage... 00:21:05.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.774 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.775 18:03:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.673 18:03:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.673 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:07.674 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.674 18:03:53 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:07.674 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:07.674 Found net devices under 0000:09:00.0: cvl_0_0 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:07.674 Found net devices under 0000:09:00.1: cvl_0_1 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:21:07.674 00:21:07.674 --- 10.0.0.2 ping statistics --- 00:21:07.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.674 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:21:07.674 00:21:07.674 --- 10.0.0.1 ping statistics --- 00:21:07.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.674 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2840614 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2840614 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2840614 ']' 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.674 18:03:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:07.674 [2024-07-24 18:03:53.823229] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
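[nvmf_identify repeats the same nvmftestinit plumbing and then starts the target with -m 0xF, so four reactor cores come up instead of one. waitforlisten (@23 above) blocks until the app answers on its RPC socket. Roughly, as a paraphrase of the nvmfappstart/waitforlisten helpers rather than their exact code:

    # Rough shape of the startup used at host/identify.sh@18/@23 above;
    # the polling loop is a paraphrase, not the literal helper.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while ! rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail if the target died during startup
        sleep 0.1
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"

The -e 0xFFFF tracepoint mask is what produces the "Tracepoint Group Mask 0xFFFF specified" notices that follow.]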
00:21:07.674 [2024-07-24 18:03:53.823309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.674 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.674 [2024-07-24 18:03:53.893470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.932 [2024-07-24 18:03:54.015430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.932 [2024-07-24 18:03:54.015487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.932 [2024-07-24 18:03:54.015504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.932 [2024-07-24 18:03:54.015518] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.932 [2024-07-24 18:03:54.015531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.932 [2024-07-24 18:03:54.015599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.932 [2024-07-24 18:03:54.015653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.932 [2024-07-24 18:03:54.015689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.932 [2024-07-24 18:03:54.015693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 [2024-07-24 18:03:54.810611] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 Malloc0 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 [2024-07-24 18:03:54.887012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.865 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.866 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:08.866 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.866 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.866 [ 00:21:08.866 { 00:21:08.866 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:08.866 "subtype": "Discovery", 00:21:08.866 "listen_addresses": [ 00:21:08.866 { 00:21:08.866 "trtype": "TCP", 00:21:08.866 "adrfam": "IPv4", 00:21:08.866 "traddr": "10.0.0.2", 00:21:08.866 "trsvcid": "4420" 00:21:08.866 } 00:21:08.866 ], 00:21:08.866 "allow_any_host": true, 00:21:08.866 "hosts": [] 00:21:08.866 }, 00:21:08.866 { 00:21:08.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.866 "subtype": "NVMe", 00:21:08.866 "listen_addresses": [ 00:21:08.866 { 00:21:08.866 "trtype": "TCP", 00:21:08.866 "adrfam": "IPv4", 00:21:08.866 "traddr": "10.0.0.2", 00:21:08.866 "trsvcid": "4420" 00:21:08.866 } 00:21:08.866 ], 00:21:08.866 "allow_any_host": true, 00:21:08.866 "hosts": [], 00:21:08.866 "serial_number": "SPDK00000000000001", 00:21:08.866 "model_number": "SPDK bdev Controller", 00:21:08.866 "max_namespaces": 32, 00:21:08.866 "min_cntlid": 1, 00:21:08.866 "max_cntlid": 65519, 00:21:08.866 "namespaces": [ 00:21:08.866 { 00:21:08.866 "nsid": 1, 00:21:08.866 "bdev_name": "Malloc0", 00:21:08.866 "name": "Malloc0", 00:21:08.866 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:08.866 "eui64": "ABCDEF0123456789", 00:21:08.866 "uuid": "b60cae5e-ab9f-4334-b0e3-2ba9c47365f3" 00:21:08.866 } 00:21:08.866 ] 00:21:08.866 } 00:21:08.866 ] 00:21:08.866 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.866 18:03:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:08.866 [2024-07-24 18:03:54.929029] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:21:08.866 [2024-07-24 18:03:54.929076] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840767 ] 00:21:08.866 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.866 [2024-07-24 18:03:54.962467] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:08.866 [2024-07-24 18:03:54.962528] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:08.866 [2024-07-24 18:03:54.962537] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:08.866 [2024-07-24 18:03:54.962552] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:08.866 [2024-07-24 18:03:54.962566] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:08.866 [2024-07-24 18:03:54.965153] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:08.866 [2024-07-24 18:03:54.965205] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x139e540 0 00:21:08.866 [2024-07-24 18:03:54.973114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:08.866 [2024-07-24 18:03:54.973138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:08.866 [2024-07-24 18:03:54.973147] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:08.866 [2024-07-24 18:03:54.973157] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:08.866 [2024-07-24 18:03:54.973223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.973236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.973243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.866 [2024-07-24 18:03:54.973261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:08.866 [2024-07-24 18:03:54.973287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.866 [2024-07-24 18:03:54.981118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.866 [2024-07-24 18:03:54.981136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.866 [2024-07-24 18:03:54.981143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.866 [2024-07-24 18:03:54.981170] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:08.866 [2024-07-24 18:03:54.981181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:08.866 [2024-07-24 18:03:54.981191] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:21:08.866 [2024-07-24 18:03:54.981213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.866 [2024-07-24 18:03:54.981240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.866 [2024-07-24 18:03:54.981264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.866 [2024-07-24 18:03:54.981428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.866 [2024-07-24 18:03:54.981443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.866 [2024-07-24 18:03:54.981450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.866 [2024-07-24 18:03:54.981470] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:08.866 [2024-07-24 18:03:54.981484] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:08.866 [2024-07-24 18:03:54.981496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.866 [2024-07-24 18:03:54.981521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.866 [2024-07-24 18:03:54.981542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.866 [2024-07-24 18:03:54.981666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.866 [2024-07-24 18:03:54.981677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.866 [2024-07-24 18:03:54.981684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.866 [2024-07-24 18:03:54.981699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:08.866 [2024-07-24 18:03:54.981713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:08.866 [2024-07-24 18:03:54.981731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.866 [2024-07-24 18:03:54.981756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.866 [2024-07-24 18:03:54.981776] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.866 [2024-07-24 18:03:54.981886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.866 [2024-07-24 18:03:54.981898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.866 [2024-07-24 18:03:54.981904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.866 [2024-07-24 18:03:54.981920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:08.866 [2024-07-24 18:03:54.981935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.981950] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.866 [2024-07-24 18:03:54.981961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.866 [2024-07-24 18:03:54.981981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.866 [2024-07-24 18:03:54.982096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.866 [2024-07-24 18:03:54.982115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.866 [2024-07-24 18:03:54.982122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.982128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.866 [2024-07-24 18:03:54.982136] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:08.866 [2024-07-24 18:03:54.982145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:08.866 [2024-07-24 18:03:54.982158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:08.866 [2024-07-24 18:03:54.982268] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:08.866 [2024-07-24 18:03:54.982276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:08.866 [2024-07-24 18:03:54.982289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.982296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.866 [2024-07-24 18:03:54.982303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.866 [2024-07-24 18:03:54.982313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.866 [2024-07-24 18:03:54.982334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.867 [2024-07-24 18:03:54.982479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:21:08.867 [2024-07-24 18:03:54.982490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.867 [2024-07-24 18:03:54.982497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.982503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.867 [2024-07-24 18:03:54.982515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:08.867 [2024-07-24 18:03:54.982532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.982540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.982547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:54.982557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.867 [2024-07-24 18:03:54.982577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.867 [2024-07-24 18:03:54.982699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.867 [2024-07-24 18:03:54.982711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.867 [2024-07-24 18:03:54.982717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.982724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.867 [2024-07-24 18:03:54.982732] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:08.867 [2024-07-24 18:03:54.982740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:08.867 [2024-07-24 18:03:54.982753] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:08.867 [2024-07-24 18:03:54.982767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:08.867 [2024-07-24 18:03:54.982782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.982790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:54.982801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.867 [2024-07-24 18:03:54.982822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.867 [2024-07-24 18:03:54.982984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.867 [2024-07-24 18:03:54.982999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.867 [2024-07-24 18:03:54.983006] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.983013] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139e540): datao=0, datal=4096, cccid=0 00:21:08.867 [2024-07-24 18:03:54.983021] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13fe3c0) on tqpair(0x139e540): expected_datao=0, payload_size=4096 00:21:08.867 [2024-07-24 18:03:54.983028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.983046] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:54.983055] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.867 [2024-07-24 18:03:55.024323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.867 [2024-07-24 18:03:55.024331] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.867 [2024-07-24 18:03:55.024350] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:08.867 [2024-07-24 18:03:55.024358] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:08.867 [2024-07-24 18:03:55.024366] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:08.867 [2024-07-24 18:03:55.024379] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:08.867 [2024-07-24 18:03:55.024388] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:08.867 [2024-07-24 18:03:55.024396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:08.867 [2024-07-24 18:03:55.024411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:08.867 [2024-07-24 18:03:55.024429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:55.024456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.867 [2024-07-24 18:03:55.024479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.867 [2024-07-24 18:03:55.024639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.867 [2024-07-24 18:03:55.024655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.867 [2024-07-24 18:03:55.024661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:08.867 [2024-07-24 18:03:55.024680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:55.024704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.867 [2024-07-24 18:03:55.024713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:55.024735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.867 [2024-07-24 18:03:55.024745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:55.024767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.867 [2024-07-24 18:03:55.024776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:55.024798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.867 [2024-07-24 18:03:55.024807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:08.867 [2024-07-24 18:03:55.024827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:08.867 [2024-07-24 18:03:55.024840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.024847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:55.024875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.867 [2024-07-24 18:03:55.024899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe3c0, cid 0, qid 0 00:21:08.867 [2024-07-24 18:03:55.024909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe540, cid 1, qid 0 00:21:08.867 [2024-07-24 18:03:55.024917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe6c0, cid 2, qid 0 00:21:08.867 [2024-07-24 18:03:55.024940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:08.867 [2024-07-24 18:03:55.024947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe9c0, cid 4, qid 0 00:21:08.867 [2024-07-24 18:03:55.029112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.867 [2024-07-24 18:03:55.029129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.867 [2024-07-24 18:03:55.029136] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.029142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe9c0) on tqpair=0x139e540 00:21:08.867 [2024-07-24 18:03:55.029151] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:08.867 [2024-07-24 18:03:55.029160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:08.867 [2024-07-24 18:03:55.029193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.029203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139e540) 00:21:08.867 [2024-07-24 18:03:55.029214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.867 [2024-07-24 18:03:55.029236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe9c0, cid 4, qid 0 00:21:08.867 [2024-07-24 18:03:55.029431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.867 [2024-07-24 18:03:55.029443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.867 [2024-07-24 18:03:55.029450] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.029456] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139e540): datao=0, datal=4096, cccid=4 00:21:08.867 [2024-07-24 18:03:55.029464] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13fe9c0) on tqpair(0x139e540): expected_datao=0, payload_size=4096 00:21:08.867 [2024-07-24 18:03:55.029471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.029482] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.029489] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.867 [2024-07-24 18:03:55.029511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.867 [2024-07-24 18:03:55.029521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.867 [2024-07-24 18:03:55.029528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe9c0) on tqpair=0x139e540 00:21:08.868 [2024-07-24 18:03:55.029552] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:08.868 [2024-07-24 18:03:55.029589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139e540) 00:21:08.868 [2024-07-24 18:03:55.029611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.868 [2024-07-24 18:03:55.029622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x139e540) 00:21:08.868 [2024-07-24 
18:03:55.029649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.868 [2024-07-24 18:03:55.029676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe9c0, cid 4, qid 0 00:21:08.868 [2024-07-24 18:03:55.029687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13feb40, cid 5, qid 0 00:21:08.868 [2024-07-24 18:03:55.029861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.868 [2024-07-24 18:03:55.029873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.868 [2024-07-24 18:03:55.029880] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029886] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139e540): datao=0, datal=1024, cccid=4 00:21:08.868 [2024-07-24 18:03:55.029894] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13fe9c0) on tqpair(0x139e540): expected_datao=0, payload_size=1024 00:21:08.868 [2024-07-24 18:03:55.029901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029911] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029918] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.868 [2024-07-24 18:03:55.029935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.868 [2024-07-24 18:03:55.029942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.029948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13feb40) on tqpair=0x139e540 00:21:08.868 [2024-07-24 18:03:55.070244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.868 [2024-07-24 18:03:55.070262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.868 [2024-07-24 18:03:55.070270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.070277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe9c0) on tqpair=0x139e540 00:21:08.868 [2024-07-24 18:03:55.070294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.070304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139e540) 00:21:08.868 [2024-07-24 18:03:55.070315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.868 [2024-07-24 18:03:55.070344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe9c0, cid 4, qid 0 00:21:08.868 [2024-07-24 18:03:55.070485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.868 [2024-07-24 18:03:55.070497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.868 [2024-07-24 18:03:55.070504] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.070510] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139e540): datao=0, datal=3072, cccid=4 00:21:08.868 [2024-07-24 18:03:55.070517] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13fe9c0) on tqpair(0x139e540): expected_datao=0, payload_size=3072 00:21:08.868 
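[Editor's note] Everything from the spdk_nvme_identify launch to this point is the standard fabrics bring-up against the discovery controller: the TCP icreq/icresp handshake, FABRIC CONNECT on the admin queue, VS/CAP/CC/CSTS property reads, CC.EN = 1 followed by polling for CSTS.RDY = 1, a 4096-byte IDENTIFY controller, four outstanding ASYNC EVENT REQUESTs (cid 0-3), and keep-alive setup. The GET LOG PAGE commands all target log ID 0x70 (the discovery log, visible in the low byte of cdw10): cdw10 0x00ff0070 reads the 1024-byte header to learn the record count, 0x02ff0070 re-reads the full 3072 bytes (header plus two 1024-byte entries), and the 0x00010070 read below fetches just the 8-byte generation counter to confirm the log did not change mid-read — consistent with the chunked retrieval in SPDK's nvme_fabric.c. Outside this harness, the kernel initiator gives a rough equivalent (an assumption; nvme-cli is not part of this test):

  # Kernel-initiator equivalent of the discovery exchange above.
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420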
[2024-07-24 18:03:55.070525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.070545] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.070554] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.114123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.868 [2024-07-24 18:03:55.114141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.868 [2024-07-24 18:03:55.114163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.114171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe9c0) on tqpair=0x139e540 00:21:08.868 [2024-07-24 18:03:55.114186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.114200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x139e540) 00:21:08.868 [2024-07-24 18:03:55.114212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.868 [2024-07-24 18:03:55.114242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe9c0, cid 4, qid 0 00:21:08.868 [2024-07-24 18:03:55.114380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.868 [2024-07-24 18:03:55.114392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.868 [2024-07-24 18:03:55.114399] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.114405] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x139e540): datao=0, datal=8, cccid=4 00:21:08.868 [2024-07-24 18:03:55.114413] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13fe9c0) on tqpair(0x139e540): expected_datao=0, payload_size=8 00:21:08.868 [2024-07-24 18:03:55.114420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.114430] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.868 [2024-07-24 18:03:55.114437] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.129 [2024-07-24 18:03:55.155248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.129 [2024-07-24 18:03:55.155268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.129 [2024-07-24 18:03:55.155275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.129 [2024-07-24 18:03:55.155282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe9c0) on tqpair=0x139e540 00:21:09.129 ===================================================== 00:21:09.129 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:09.129 ===================================================== 00:21:09.129 Controller Capabilities/Features 00:21:09.129 ================================ 00:21:09.130 Vendor ID: 0000 00:21:09.130 Subsystem Vendor ID: 0000 00:21:09.130 Serial Number: .................... 00:21:09.130 Model Number: ........................................ 
00:21:09.130 Firmware Version: 24.09 00:21:09.130 Recommended Arb Burst: 0 00:21:09.130 IEEE OUI Identifier: 00 00 00 00:21:09.130 Multi-path I/O 00:21:09.130 May have multiple subsystem ports: No 00:21:09.130 May have multiple controllers: No 00:21:09.130 Associated with SR-IOV VF: No 00:21:09.130 Max Data Transfer Size: 131072 00:21:09.130 Max Number of Namespaces: 0 00:21:09.130 Max Number of I/O Queues: 1024 00:21:09.130 NVMe Specification Version (VS): 1.3 00:21:09.130 NVMe Specification Version (Identify): 1.3 00:21:09.130 Maximum Queue Entries: 128 00:21:09.130 Contiguous Queues Required: Yes 00:21:09.130 Arbitration Mechanisms Supported 00:21:09.130 Weighted Round Robin: Not Supported 00:21:09.130 Vendor Specific: Not Supported 00:21:09.130 Reset Timeout: 15000 ms 00:21:09.130 Doorbell Stride: 4 bytes 00:21:09.130 NVM Subsystem Reset: Not Supported 00:21:09.130 Command Sets Supported 00:21:09.130 NVM Command Set: Supported 00:21:09.130 Boot Partition: Not Supported 00:21:09.130 Memory Page Size Minimum: 4096 bytes 00:21:09.130 Memory Page Size Maximum: 4096 bytes 00:21:09.130 Persistent Memory Region: Not Supported 00:21:09.130 Optional Asynchronous Events Supported 00:21:09.130 Namespace Attribute Notices: Not Supported 00:21:09.130 Firmware Activation Notices: Not Supported 00:21:09.130 ANA Change Notices: Not Supported 00:21:09.130 PLE Aggregate Log Change Notices: Not Supported 00:21:09.130 LBA Status Info Alert Notices: Not Supported 00:21:09.130 EGE Aggregate Log Change Notices: Not Supported 00:21:09.130 Normal NVM Subsystem Shutdown event: Not Supported 00:21:09.130 Zone Descriptor Change Notices: Not Supported 00:21:09.130 Discovery Log Change Notices: Supported 00:21:09.130 Controller Attributes 00:21:09.130 128-bit Host Identifier: Not Supported 00:21:09.130 Non-Operational Permissive Mode: Not Supported 00:21:09.130 NVM Sets: Not Supported 00:21:09.130 Read Recovery Levels: Not Supported 00:21:09.130 Endurance Groups: Not Supported 00:21:09.130 Predictable Latency Mode: Not Supported 00:21:09.130 Traffic Based Keep ALive: Not Supported 00:21:09.130 Namespace Granularity: Not Supported 00:21:09.130 SQ Associations: Not Supported 00:21:09.130 UUID List: Not Supported 00:21:09.130 Multi-Domain Subsystem: Not Supported 00:21:09.130 Fixed Capacity Management: Not Supported 00:21:09.130 Variable Capacity Management: Not Supported 00:21:09.130 Delete Endurance Group: Not Supported 00:21:09.130 Delete NVM Set: Not Supported 00:21:09.130 Extended LBA Formats Supported: Not Supported 00:21:09.130 Flexible Data Placement Supported: Not Supported 00:21:09.130 00:21:09.130 Controller Memory Buffer Support 00:21:09.130 ================================ 00:21:09.130 Supported: No 00:21:09.130 00:21:09.130 Persistent Memory Region Support 00:21:09.130 ================================ 00:21:09.130 Supported: No 00:21:09.130 00:21:09.130 Admin Command Set Attributes 00:21:09.130 ============================ 00:21:09.130 Security Send/Receive: Not Supported 00:21:09.130 Format NVM: Not Supported 00:21:09.130 Firmware Activate/Download: Not Supported 00:21:09.130 Namespace Management: Not Supported 00:21:09.130 Device Self-Test: Not Supported 00:21:09.130 Directives: Not Supported 00:21:09.130 NVMe-MI: Not Supported 00:21:09.130 Virtualization Management: Not Supported 00:21:09.130 Doorbell Buffer Config: Not Supported 00:21:09.130 Get LBA Status Capability: Not Supported 00:21:09.130 Command & Feature Lockdown Capability: Not Supported 00:21:09.130 Abort Command Limit: 1 00:21:09.130 Async 
Event Request Limit: 4 00:21:09.130 Number of Firmware Slots: N/A 00:21:09.130 Firmware Slot 1 Read-Only: N/A 00:21:09.130 Firmware Activation Without Reset: N/A 00:21:09.130 Multiple Update Detection Support: N/A 00:21:09.130 Firmware Update Granularity: No Information Provided 00:21:09.130 Per-Namespace SMART Log: No 00:21:09.130 Asymmetric Namespace Access Log Page: Not Supported 00:21:09.130 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:09.130 Command Effects Log Page: Not Supported 00:21:09.130 Get Log Page Extended Data: Supported 00:21:09.130 Telemetry Log Pages: Not Supported 00:21:09.130 Persistent Event Log Pages: Not Supported 00:21:09.130 Supported Log Pages Log Page: May Support 00:21:09.130 Commands Supported & Effects Log Page: Not Supported 00:21:09.130 Feature Identifiers & Effects Log Page:May Support 00:21:09.130 NVMe-MI Commands & Effects Log Page: May Support 00:21:09.130 Data Area 4 for Telemetry Log: Not Supported 00:21:09.130 Error Log Page Entries Supported: 128 00:21:09.130 Keep Alive: Not Supported 00:21:09.130 00:21:09.130 NVM Command Set Attributes 00:21:09.130 ========================== 00:21:09.130 Submission Queue Entry Size 00:21:09.130 Max: 1 00:21:09.130 Min: 1 00:21:09.130 Completion Queue Entry Size 00:21:09.130 Max: 1 00:21:09.130 Min: 1 00:21:09.130 Number of Namespaces: 0 00:21:09.130 Compare Command: Not Supported 00:21:09.130 Write Uncorrectable Command: Not Supported 00:21:09.130 Dataset Management Command: Not Supported 00:21:09.130 Write Zeroes Command: Not Supported 00:21:09.130 Set Features Save Field: Not Supported 00:21:09.130 Reservations: Not Supported 00:21:09.130 Timestamp: Not Supported 00:21:09.130 Copy: Not Supported 00:21:09.130 Volatile Write Cache: Not Present 00:21:09.130 Atomic Write Unit (Normal): 1 00:21:09.130 Atomic Write Unit (PFail): 1 00:21:09.130 Atomic Compare & Write Unit: 1 00:21:09.130 Fused Compare & Write: Supported 00:21:09.130 Scatter-Gather List 00:21:09.130 SGL Command Set: Supported 00:21:09.130 SGL Keyed: Supported 00:21:09.130 SGL Bit Bucket Descriptor: Not Supported 00:21:09.130 SGL Metadata Pointer: Not Supported 00:21:09.130 Oversized SGL: Not Supported 00:21:09.130 SGL Metadata Address: Not Supported 00:21:09.130 SGL Offset: Supported 00:21:09.130 Transport SGL Data Block: Not Supported 00:21:09.130 Replay Protected Memory Block: Not Supported 00:21:09.130 00:21:09.130 Firmware Slot Information 00:21:09.130 ========================= 00:21:09.130 Active slot: 0 00:21:09.130 00:21:09.130 00:21:09.130 Error Log 00:21:09.130 ========= 00:21:09.130 00:21:09.130 Active Namespaces 00:21:09.130 ================= 00:21:09.130 Discovery Log Page 00:21:09.130 ================== 00:21:09.130 Generation Counter: 2 00:21:09.130 Number of Records: 2 00:21:09.130 Record Format: 0 00:21:09.130 00:21:09.130 Discovery Log Entry 0 00:21:09.130 ---------------------- 00:21:09.130 Transport Type: 3 (TCP) 00:21:09.130 Address Family: 1 (IPv4) 00:21:09.130 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:09.130 Entry Flags: 00:21:09.130 Duplicate Returned Information: 1 00:21:09.130 Explicit Persistent Connection Support for Discovery: 1 00:21:09.130 Transport Requirements: 00:21:09.130 Secure Channel: Not Required 00:21:09.130 Port ID: 0 (0x0000) 00:21:09.130 Controller ID: 65535 (0xffff) 00:21:09.130 Admin Max SQ Size: 128 00:21:09.130 Transport Service Identifier: 4420 00:21:09.130 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:09.130 Transport Address: 10.0.0.2 00:21:09.130 
Discovery Log Entry 1 00:21:09.130 ---------------------- 00:21:09.130 Transport Type: 3 (TCP) 00:21:09.130 Address Family: 1 (IPv4) 00:21:09.130 Subsystem Type: 2 (NVM Subsystem) 00:21:09.130 Entry Flags: 00:21:09.130 Duplicate Returned Information: 0 00:21:09.130 Explicit Persistent Connection Support for Discovery: 0 00:21:09.130 Transport Requirements: 00:21:09.130 Secure Channel: Not Required 00:21:09.130 Port ID: 0 (0x0000) 00:21:09.130 Controller ID: 65535 (0xffff) 00:21:09.130 Admin Max SQ Size: 128 00:21:09.130 Transport Service Identifier: 4420 00:21:09.130 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:09.130 Transport Address: 10.0.0.2 [2024-07-24 18:03:55.155390] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:09.130 [2024-07-24 18:03:55.155411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe3c0) on tqpair=0x139e540 00:21:09.130 [2024-07-24 18:03:55.155423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.130 [2024-07-24 18:03:55.155432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe540) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.155440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.131 [2024-07-24 18:03:55.155448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe6c0) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.155456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.131 [2024-07-24 18:03:55.155464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.155471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.131 [2024-07-24 18:03:55.155489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.155498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.155505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.155516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.155556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.155732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.155748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.155754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.155761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.155773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.155784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.155791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 
18:03:55.155802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.155830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.155981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.155993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.155999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.156014] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:09.131 [2024-07-24 18:03:55.156023] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:09.131 [2024-07-24 18:03:55.156038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156047] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.156064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.156084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.156212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.156228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.156234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.156258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.156284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.156305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.156418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.156433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.156439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.156462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156478] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.156488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.156508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.156627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.156638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.156645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.156672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.156698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.156718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.156833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.156845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.156851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.156873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.156889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.156899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.156919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.157041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.157053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.157059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.157081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.157116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.157137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.157252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.157267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.157274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.157297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.157323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.157343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.157460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.157472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.157478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.157505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.131 [2024-07-24 18:03:55.157531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.131 [2024-07-24 18:03:55.157551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.131 [2024-07-24 18:03:55.157671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.131 [2024-07-24 18:03:55.157685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.131 [2024-07-24 18:03:55.157692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.131 [2024-07-24 18:03:55.157698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.131 [2024-07-24 18:03:55.157715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.157724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.157730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.132 [2024-07-24 18:03:55.157741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.132 [2024-07-24 18:03:55.157761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.132 
[2024-07-24 18:03:55.157879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.157894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 [2024-07-24 18:03:55.157901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.157907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.132 [2024-07-24 18:03:55.157924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.157933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.157939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.132 [2024-07-24 18:03:55.157949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.132 [2024-07-24 18:03:55.157970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.132 [2024-07-24 18:03:55.158084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.158098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 [2024-07-24 18:03:55.162117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.162125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.132 [2024-07-24 18:03:55.162144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.162168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.162175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x139e540) 00:21:09.132 [2024-07-24 18:03:55.162186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.132 [2024-07-24 18:03:55.162209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13fe840, cid 3, qid 0 00:21:09.132 [2024-07-24 18:03:55.162363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.162375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 [2024-07-24 18:03:55.162381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.162388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13fe840) on tqpair=0x139e540 00:21:09.132 [2024-07-24 18:03:55.162401] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:09.132 00:21:09.132 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:09.132 [2024-07-24 18:03:55.197368] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
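The host/identify.sh step above drives this whole trace with a single spdk_nvme_identify invocation. A minimal sketch of the same call outside the harness, with the repo path, target address, and subsystem NQN copied verbatim from this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all   # -L all enables every trace flag; this is what emits the *DEBUG* lines throughout this log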
00:21:09.132 [2024-07-24 18:03:55.197424] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840774 ] 00:21:09.132 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.132 [2024-07-24 18:03:55.228804] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:09.132 [2024-07-24 18:03:55.228850] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:09.132 [2024-07-24 18:03:55.228860] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:09.132 [2024-07-24 18:03:55.228872] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:09.132 [2024-07-24 18:03:55.228884] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:09.132 [2024-07-24 18:03:55.232150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:09.132 [2024-07-24 18:03:55.232186] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7b4540 0 00:21:09.132 [2024-07-24 18:03:55.239110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:09.132 [2024-07-24 18:03:55.239135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:09.132 [2024-07-24 18:03:55.239144] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:09.132 [2024-07-24 18:03:55.239151] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:09.132 [2024-07-24 18:03:55.239189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.239201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.239208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.132 [2024-07-24 18:03:55.239222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:09.132 [2024-07-24 18:03:55.239249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.132 [2024-07-24 18:03:55.247114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.247132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 [2024-07-24 18:03:55.247139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.132 [2024-07-24 18:03:55.247160] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:09.132 [2024-07-24 18:03:55.247171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:09.132 [2024-07-24 18:03:55.247181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:09.132 [2024-07-24 18:03:55.247201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 
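The "EAL: No free 2048 kB hugepages reported on node 1" line above is informational here — the identify run completes regardless, as the rest of this trace shows — and is common on hosts where hugepages are reserved on a single NUMA node. A quick check of the reservations on a host like this one, using standard Linux procfs/sysfs paths:

grep -i hugepages_ /proc/meminfo                                             # system-wide totals
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages   # per-NUMA-node reservations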
[2024-07-24 18:03:55.247217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.132 [2024-07-24 18:03:55.247228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.132 [2024-07-24 18:03:55.247256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.132 [2024-07-24 18:03:55.247403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.247416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 [2024-07-24 18:03:55.247422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.132 [2024-07-24 18:03:55.247441] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:09.132 [2024-07-24 18:03:55.247455] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:09.132 [2024-07-24 18:03:55.247467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.132 [2024-07-24 18:03:55.247491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.132 [2024-07-24 18:03:55.247512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.132 [2024-07-24 18:03:55.247630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.247642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 [2024-07-24 18:03:55.247649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.132 [2024-07-24 18:03:55.247664] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:09.132 [2024-07-24 18:03:55.247678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:09.132 [2024-07-24 18:03:55.247690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.132 [2024-07-24 18:03:55.247714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.132 [2024-07-24 18:03:55.247735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.132 [2024-07-24 18:03:55.247847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.247859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 
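The connect sequence above steps through the named controller-initialization states (read vs, read cap, check en, ...), each logged by _nvme_ctrlr_set_state. One way to follow just that state machine in a saved capture of this output ("identify.log" again being a hypothetical file name):

grep 'setting state to' identify.log | awk -F'setting state to ' '{print $2}'   # state names, in order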
[2024-07-24 18:03:55.247865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.132 [2024-07-24 18:03:55.247880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:09.132 [2024-07-24 18:03:55.247896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.247911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.132 [2024-07-24 18:03:55.247922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.132 [2024-07-24 18:03:55.247942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.132 [2024-07-24 18:03:55.248064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.132 [2024-07-24 18:03:55.248084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.132 [2024-07-24 18:03:55.248091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.132 [2024-07-24 18:03:55.248098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.132 [2024-07-24 18:03:55.248114] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:09.132 [2024-07-24 18:03:55.248123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:09.132 [2024-07-24 18:03:55.248137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:09.133 [2024-07-24 18:03:55.248247] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:09.133 [2024-07-24 18:03:55.248254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:09.133 [2024-07-24 18:03:55.248265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.248291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.133 [2024-07-24 18:03:55.248312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.133 [2024-07-24 18:03:55.248460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.133 [2024-07-24 18:03:55.248475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.133 [2024-07-24 18:03:55.248482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.133 [2024-07-24 
18:03:55.248497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:09.133 [2024-07-24 18:03:55.248513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.248539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.133 [2024-07-24 18:03:55.248560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.133 [2024-07-24 18:03:55.248676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.133 [2024-07-24 18:03:55.248688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.133 [2024-07-24 18:03:55.248695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.133 [2024-07-24 18:03:55.248709] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:09.133 [2024-07-24 18:03:55.248718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.248731] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:09.133 [2024-07-24 18:03:55.248748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.248762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.248784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.133 [2024-07-24 18:03:55.248807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.133 [2024-07-24 18:03:55.248951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.133 [2024-07-24 18:03:55.248966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.133 [2024-07-24 18:03:55.248973] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.248979] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=4096, cccid=0 00:21:09.133 [2024-07-24 18:03:55.248987] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8143c0) on tqpair(0x7b4540): expected_datao=0, payload_size=4096 00:21:09.133 [2024-07-24 18:03:55.248994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249012] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249022] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.133 
[2024-07-24 18:03:55.249088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.133 [2024-07-24 18:03:55.249100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.133 [2024-07-24 18:03:55.249115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.133 [2024-07-24 18:03:55.249132] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:09.133 [2024-07-24 18:03:55.249141] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:09.133 [2024-07-24 18:03:55.249148] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:09.133 [2024-07-24 18:03:55.249155] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:09.133 [2024-07-24 18:03:55.249162] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:09.133 [2024-07-24 18:03:55.249170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.249184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.249200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.249226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:09.133 [2024-07-24 18:03:55.249248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.133 [2024-07-24 18:03:55.249372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.133 [2024-07-24 18:03:55.249387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.133 [2024-07-24 18:03:55.249394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540 00:21:09.133 [2024-07-24 18:03:55.249411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.249435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.133 [2024-07-24 18:03:55.249449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7b4540) 
00:21:09.133 [2024-07-24 18:03:55.249472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.133 [2024-07-24 18:03:55.249481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.249503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.133 [2024-07-24 18:03:55.249513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.249550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.133 [2024-07-24 18:03:55.249559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.249577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.249589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.249607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.133 [2024-07-24 18:03:55.249629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8143c0, cid 0, qid 0 00:21:09.133 [2024-07-24 18:03:55.249655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814540, cid 1, qid 0 00:21:09.133 [2024-07-24 18:03:55.249663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8146c0, cid 2, qid 0 00:21:09.133 [2024-07-24 18:03:55.249670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814840, cid 3, qid 0 00:21:09.133 [2024-07-24 18:03:55.249678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 4, qid 0 00:21:09.133 [2024-07-24 18:03:55.249846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.133 [2024-07-24 18:03:55.249858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.133 [2024-07-24 18:03:55.249865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7b4540 00:21:09.133 [2024-07-24 18:03:55.249880] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:09.133 [2024-07-24 18:03:55.249888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.249906] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.249918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:09.133 [2024-07-24 18:03:55.249928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.133 [2024-07-24 18:03:55.249942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b4540) 00:21:09.133 [2024-07-24 18:03:55.249970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:09.133 [2024-07-24 18:03:55.249992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 4, qid 0 00:21:09.133 [2024-07-24 18:03:55.250154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.133 [2024-07-24 18:03:55.250168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.134 [2024-07-24 18:03:55.250175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.250182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7b4540 00:21:09.134 [2024-07-24 18:03:55.250250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.250270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.250284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.250292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b4540) 00:21:09.134 [2024-07-24 18:03:55.250302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.134 [2024-07-24 18:03:55.250324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 4, qid 0 00:21:09.134 [2024-07-24 18:03:55.254114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.134 [2024-07-24 18:03:55.254130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.134 [2024-07-24 18:03:55.254137] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.254144] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=4096, cccid=4 00:21:09.134 [2024-07-24 18:03:55.254151] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8149c0) on tqpair(0x7b4540): expected_datao=0, payload_size=4096 00:21:09.134 [2024-07-24 18:03:55.254159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.254169] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.254176] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.134 [2024-07-24 18:03:55.294133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:21:09.134 [2024-07-24 18:03:55.294140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7b4540 00:21:09.134 [2024-07-24 18:03:55.294162] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:09.134 [2024-07-24 18:03:55.294179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.294196] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.294225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b4540) 00:21:09.134 [2024-07-24 18:03:55.294245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.134 [2024-07-24 18:03:55.294268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 4, qid 0 00:21:09.134 [2024-07-24 18:03:55.294467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.134 [2024-07-24 18:03:55.294479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.134 [2024-07-24 18:03:55.294490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294497] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=4096, cccid=4 00:21:09.134 [2024-07-24 18:03:55.294505] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8149c0) on tqpair(0x7b4540): expected_datao=0, payload_size=4096 00:21:09.134 [2024-07-24 18:03:55.294512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294523] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294530] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.134 [2024-07-24 18:03:55.294562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.134 [2024-07-24 18:03:55.294569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7b4540 00:21:09.134 [2024-07-24 18:03:55.294597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.294615] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.294629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b4540) 00:21:09.134 [2024-07-24 18:03:55.294648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.134 [2024-07-24 18:03:55.294669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 4, qid 0 00:21:09.134 [2024-07-24 18:03:55.294803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.134 [2024-07-24 18:03:55.294818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.134 [2024-07-24 18:03:55.294825] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294831] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=4096, cccid=4 00:21:09.134 [2024-07-24 18:03:55.294839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8149c0) on tqpair(0x7b4540): expected_datao=0, payload_size=4096 00:21:09.134 [2024-07-24 18:03:55.294846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294864] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.294873] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.335236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.134 [2024-07-24 18:03:55.335254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.134 [2024-07-24 18:03:55.335261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.335268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7b4540 00:21:09.134 [2024-07-24 18:03:55.335281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.335296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.335311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.335324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.335333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.335344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.335354] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:09.134 [2024-07-24 18:03:55.335361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:09.134 [2024-07-24 18:03:55.335370] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:09.134 [2024-07-24 18:03:55.335389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.335397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b4540) 00:21:09.134 [2024-07-24 18:03:55.335409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.134 [2024-07-24 18:03:55.335420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.335427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.134 [2024-07-24 18:03:55.335433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7b4540) 00:21:09.134 [2024-07-24 18:03:55.335443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.134 [2024-07-24 18:03:55.335469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 4, qid 0 00:21:09.134 [2024-07-24 18:03:55.335481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814b40, cid 5, qid 0 00:21:09.134 [2024-07-24 18:03:55.335604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.335616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.335623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.335629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.335639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.335648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.335654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.335661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814b40) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.335676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.335685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7b4540) 00:21:09.135 [2024-07-24 18:03:55.335696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.135 [2024-07-24 18:03:55.335716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814b40, cid 5, qid 0 00:21:09.135 [2024-07-24 18:03:55.335837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.335850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.335856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.335863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814b40) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.335878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.335887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7b4540) 00:21:09.135 [2024-07-24 18:03:55.335898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.135 [2024-07-24 18:03:55.335918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814b40, cid 5, qid 0 00:21:09.135 [2024-07-24 18:03:55.336029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.336042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.336052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814b40) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.336075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7b4540) 00:21:09.135 [2024-07-24 18:03:55.336094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.135 [2024-07-24 18:03:55.336123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814b40, cid 5, qid 0 00:21:09.135 [2024-07-24 18:03:55.336238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.336250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.336257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814b40) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.336287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7b4540) 00:21:09.135 [2024-07-24 18:03:55.336309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.135 [2024-07-24 18:03:55.336322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b4540) 00:21:09.135 [2024-07-24 18:03:55.336339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.135 [2024-07-24 18:03:55.336351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7b4540) 00:21:09.135 [2024-07-24 18:03:55.336368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.135 [2024-07-24 18:03:55.336380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7b4540) 00:21:09.135 [2024-07-24 18:03:55.336412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.135 [2024-07-24 18:03:55.336434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814b40, cid 5, qid 0 00:21:09.135 [2024-07-24 18:03:55.336444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8149c0, cid 4, qid 0 00:21:09.135 [2024-07-24 18:03:55.336452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814cc0, cid 6, qid 0 00:21:09.135 [2024-07-24 
18:03:55.336459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 7, qid 0 00:21:09.135 [2024-07-24 18:03:55.336725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.135 [2024-07-24 18:03:55.336738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.135 [2024-07-24 18:03:55.336745] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336751] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=8192, cccid=5 00:21:09.135 [2024-07-24 18:03:55.336759] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x814b40) on tqpair(0x7b4540): expected_datao=0, payload_size=8192 00:21:09.135 [2024-07-24 18:03:55.336766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336787] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336801] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.135 [2024-07-24 18:03:55.336819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.135 [2024-07-24 18:03:55.336825] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336832] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=512, cccid=4 00:21:09.135 [2024-07-24 18:03:55.336840] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8149c0) on tqpair(0x7b4540): expected_datao=0, payload_size=512 00:21:09.135 [2024-07-24 18:03:55.336847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336856] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336863] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.135 [2024-07-24 18:03:55.336880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.135 [2024-07-24 18:03:55.336887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336893] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=512, cccid=6 00:21:09.135 [2024-07-24 18:03:55.336901] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x814cc0) on tqpair(0x7b4540): expected_datao=0, payload_size=512 00:21:09.135 [2024-07-24 18:03:55.336908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336917] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336924] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.135 [2024-07-24 18:03:55.336941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.135 [2024-07-24 18:03:55.336948] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336954] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b4540): datao=0, datal=4096, cccid=7 00:21:09.135 [2024-07-24 18:03:55.336962] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x814e40) on tqpair(0x7b4540): expected_datao=0, payload_size=4096 00:21:09.135 [2024-07-24 18:03:55.336984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.336996] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.337003] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.337015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.337025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.337032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.337038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814b40) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.337072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.337084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.337090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.337097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8149c0) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.337134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.337146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.337152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.337159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814cc0) on tqpair=0x7b4540 00:21:09.135 [2024-07-24 18:03:55.337170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.135 [2024-07-24 18:03:55.337179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.135 [2024-07-24 18:03:55.337189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.135 [2024-07-24 18:03:55.337196] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7b4540 00:21:09.135 ===================================================== 00:21:09.135 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.135 ===================================================== 00:21:09.135 Controller Capabilities/Features 00:21:09.135 ================================ 00:21:09.135 Vendor ID: 8086 00:21:09.135 Subsystem Vendor ID: 8086 00:21:09.135 Serial Number: SPDK00000000000001 00:21:09.135 Model Number: SPDK bdev Controller 00:21:09.135 Firmware Version: 24.09 00:21:09.135 Recommended Arb Burst: 6 00:21:09.135 IEEE OUI Identifier: e4 d2 5c 00:21:09.135 Multi-path I/O 00:21:09.135 May have multiple subsystem ports: Yes 00:21:09.135 May have multiple controllers: Yes 00:21:09.135 Associated with SR-IOV VF: No 00:21:09.135 Max Data Transfer Size: 131072 00:21:09.135 Max Number of Namespaces: 32 00:21:09.136 Max Number of I/O Queues: 127 00:21:09.136 NVMe Specification Version (VS): 1.3 00:21:09.136 NVMe Specification Version (Identify): 1.3 00:21:09.136 Maximum Queue Entries: 128 00:21:09.136 Contiguous Queues Required: Yes 00:21:09.136 Arbitration Mechanisms Supported 00:21:09.136 Weighted Round Robin: Not Supported 00:21:09.136 Vendor Specific: Not Supported 00:21:09.136 Reset Timeout: 15000 ms 00:21:09.136 
Doorbell Stride: 4 bytes 00:21:09.136 NVM Subsystem Reset: Not Supported 00:21:09.136 Command Sets Supported 00:21:09.136 NVM Command Set: Supported 00:21:09.136 Boot Partition: Not Supported 00:21:09.136 Memory Page Size Minimum: 4096 bytes 00:21:09.136 Memory Page Size Maximum: 4096 bytes 00:21:09.136 Persistent Memory Region: Not Supported 00:21:09.136 Optional Asynchronous Events Supported 00:21:09.136 Namespace Attribute Notices: Supported 00:21:09.136 Firmware Activation Notices: Not Supported 00:21:09.136 ANA Change Notices: Not Supported 00:21:09.136 PLE Aggregate Log Change Notices: Not Supported 00:21:09.136 LBA Status Info Alert Notices: Not Supported 00:21:09.136 EGE Aggregate Log Change Notices: Not Supported 00:21:09.136 Normal NVM Subsystem Shutdown event: Not Supported 00:21:09.136 Zone Descriptor Change Notices: Not Supported 00:21:09.136 Discovery Log Change Notices: Not Supported 00:21:09.136 Controller Attributes 00:21:09.136 128-bit Host Identifier: Supported 00:21:09.136 Non-Operational Permissive Mode: Not Supported 00:21:09.136 NVM Sets: Not Supported 00:21:09.136 Read Recovery Levels: Not Supported 00:21:09.136 Endurance Groups: Not Supported 00:21:09.136 Predictable Latency Mode: Not Supported 00:21:09.136 Traffic Based Keep ALive: Not Supported 00:21:09.136 Namespace Granularity: Not Supported 00:21:09.136 SQ Associations: Not Supported 00:21:09.136 UUID List: Not Supported 00:21:09.136 Multi-Domain Subsystem: Not Supported 00:21:09.136 Fixed Capacity Management: Not Supported 00:21:09.136 Variable Capacity Management: Not Supported 00:21:09.136 Delete Endurance Group: Not Supported 00:21:09.136 Delete NVM Set: Not Supported 00:21:09.136 Extended LBA Formats Supported: Not Supported 00:21:09.136 Flexible Data Placement Supported: Not Supported 00:21:09.136 00:21:09.136 Controller Memory Buffer Support 00:21:09.136 ================================ 00:21:09.136 Supported: No 00:21:09.136 00:21:09.136 Persistent Memory Region Support 00:21:09.136 ================================ 00:21:09.136 Supported: No 00:21:09.136 00:21:09.136 Admin Command Set Attributes 00:21:09.136 ============================ 00:21:09.136 Security Send/Receive: Not Supported 00:21:09.136 Format NVM: Not Supported 00:21:09.136 Firmware Activate/Download: Not Supported 00:21:09.136 Namespace Management: Not Supported 00:21:09.136 Device Self-Test: Not Supported 00:21:09.136 Directives: Not Supported 00:21:09.136 NVMe-MI: Not Supported 00:21:09.136 Virtualization Management: Not Supported 00:21:09.136 Doorbell Buffer Config: Not Supported 00:21:09.136 Get LBA Status Capability: Not Supported 00:21:09.136 Command & Feature Lockdown Capability: Not Supported 00:21:09.136 Abort Command Limit: 4 00:21:09.136 Async Event Request Limit: 4 00:21:09.136 Number of Firmware Slots: N/A 00:21:09.136 Firmware Slot 1 Read-Only: N/A 00:21:09.136 Firmware Activation Without Reset: N/A 00:21:09.136 Multiple Update Detection Support: N/A 00:21:09.136 Firmware Update Granularity: No Information Provided 00:21:09.136 Per-Namespace SMART Log: No 00:21:09.136 Asymmetric Namespace Access Log Page: Not Supported 00:21:09.136 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:09.136 Command Effects Log Page: Supported 00:21:09.136 Get Log Page Extended Data: Supported 00:21:09.136 Telemetry Log Pages: Not Supported 00:21:09.136 Persistent Event Log Pages: Not Supported 00:21:09.136 Supported Log Pages Log Page: May Support 00:21:09.136 Commands Supported & Effects Log Page: Not Supported 00:21:09.136 Feature Identifiers & 
Effects Log Page: May Support
00:21:09.136 NVMe-MI Commands & Effects Log Page: May Support
00:21:09.136 Data Area 4 for Telemetry Log: Not Supported
00:21:09.136 Error Log Page Entries Supported: 128
00:21:09.136 Keep Alive: Supported
00:21:09.136 Keep Alive Granularity: 10000 ms
00:21:09.136
00:21:09.136 NVM Command Set Attributes
00:21:09.136 ==========================
00:21:09.136 Submission Queue Entry Size
00:21:09.136 Max: 64
00:21:09.136 Min: 64
00:21:09.136 Completion Queue Entry Size
00:21:09.136 Max: 16
00:21:09.136 Min: 16
00:21:09.136 Number of Namespaces: 32
00:21:09.136 Compare Command: Supported
00:21:09.136 Write Uncorrectable Command: Not Supported
00:21:09.136 Dataset Management Command: Supported
00:21:09.136 Write Zeroes Command: Supported
00:21:09.136 Set Features Save Field: Not Supported
00:21:09.136 Reservations: Supported
00:21:09.136 Timestamp: Not Supported
00:21:09.136 Copy: Supported
00:21:09.136 Volatile Write Cache: Present
00:21:09.136 Atomic Write Unit (Normal): 1
00:21:09.136 Atomic Write Unit (PFail): 1
00:21:09.136 Atomic Compare & Write Unit: 1
00:21:09.136 Fused Compare & Write: Supported
00:21:09.136 Scatter-Gather List
00:21:09.136 SGL Command Set: Supported
00:21:09.136 SGL Keyed: Supported
00:21:09.136 SGL Bit Bucket Descriptor: Not Supported
00:21:09.136 SGL Metadata Pointer: Not Supported
00:21:09.136 Oversized SGL: Not Supported
00:21:09.136 SGL Metadata Address: Not Supported
00:21:09.136 SGL Offset: Supported
00:21:09.136 Transport SGL Data Block: Not Supported
00:21:09.136 Replay Protected Memory Block: Not Supported
00:21:09.136
00:21:09.136 Firmware Slot Information
00:21:09.136 =========================
00:21:09.136 Active slot: 1
00:21:09.136 Slot 1 Firmware Revision: 24.09
00:21:09.136
00:21:09.136
00:21:09.136 Commands Supported and Effects
00:21:09.136 ==============================
00:21:09.136 Admin Commands
00:21:09.136 --------------
00:21:09.136 Get Log Page (02h): Supported
00:21:09.136 Identify (06h): Supported
00:21:09.136 Abort (08h): Supported
00:21:09.136 Set Features (09h): Supported
00:21:09.136 Get Features (0Ah): Supported
00:21:09.136 Asynchronous Event Request (0Ch): Supported
00:21:09.136 Keep Alive (18h): Supported
00:21:09.136 I/O Commands
00:21:09.136 ------------
00:21:09.136 Flush (00h): Supported LBA-Change
00:21:09.136 Write (01h): Supported LBA-Change
00:21:09.136 Read (02h): Supported
00:21:09.136 Compare (05h): Supported
00:21:09.136 Write Zeroes (08h): Supported LBA-Change
00:21:09.136 Dataset Management (09h): Supported LBA-Change
00:21:09.136 Copy (19h): Supported LBA-Change
00:21:09.136
00:21:09.136 Error Log
00:21:09.136 =========
00:21:09.136
00:21:09.136 Arbitration
00:21:09.136 ===========
00:21:09.136 Arbitration Burst: 1
00:21:09.136
00:21:09.136 Power Management
00:21:09.136 ================
00:21:09.136 Number of Power States: 1
00:21:09.136 Current Power State: Power State #0
00:21:09.136 Power State #0:
00:21:09.136 Max Power: 0.00 W
00:21:09.136 Non-Operational State: Operational
00:21:09.136 Entry Latency: Not Reported
00:21:09.136 Exit Latency: Not Reported
00:21:09.136 Relative Read Throughput: 0
00:21:09.136 Relative Read Latency: 0
00:21:09.136 Relative Write Throughput: 0
00:21:09.136 Relative Write Latency: 0
00:21:09.136 Idle Power: Not Reported
00:21:09.136 Active Power: Not Reported
00:21:09.136 Non-Operational Permissive Mode: Not Supported
00:21:09.136
00:21:09.136 Health Information
00:21:09.136 ==================
00:21:09.136 Critical Warnings:
00:21:09.136 Available Spare Space: OK
00:21:09.136 Temperature: OK
00:21:09.136 Device Reliability: OK
00:21:09.136 Read Only: No
00:21:09.136 Volatile Memory Backup: OK
00:21:09.136 Current Temperature: 0 Kelvin (-273 Celsius)
00:21:09.136 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:21:09.136 Available Spare: 0%
00:21:09.136 Available Spare Threshold: 0%
00:21:09.136 Life Percentage Used:[2024-07-24 18:03:55.337316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:09.136 [2024-07-24 18:03:55.337328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7b4540)
00:21:09.136 [2024-07-24 18:03:55.337339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.136 [2024-07-24 18:03:55.337362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814e40, cid 7, qid 0
00:21:09.136 [2024-07-24 18:03:55.337521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:09.136 [2024-07-24 18:03:55.337533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:09.136 [2024-07-24 18:03:55.337540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:09.136 [2024-07-24 18:03:55.337547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814e40) on tqpair=0x7b4540
00:21:09.136 [2024-07-24 18:03:55.337587] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:21:09.136 [2024-07-24 18:03:55.337606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8143c0) on tqpair=0x7b4540
00:21:09.136 [2024-07-24 18:03:55.337616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.137 [2024-07-24 18:03:55.337625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814540) on tqpair=0x7b4540
00:21:09.137 [2024-07-24 18:03:55.337633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.137 [2024-07-24 18:03:55.337641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8146c0) on tqpair=0x7b4540
00:21:09.137 [2024-07-24 18:03:55.337648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.137 [2024-07-24 18:03:55.337657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814840) on tqpair=0x7b4540
00:21:09.137 [2024-07-24 18:03:55.337664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:09.137 [2024-07-24 18:03:55.337692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.337700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.337706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b4540)
00:21:09.137 [2024-07-24 18:03:55.337716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.137 [2024-07-24 18:03:55.337738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814840, cid 3, qid 0
00:21:09.137 [2024-07-24 18:03:55.337887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:09.137 [2024-07-24 18:03:55.337899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:09.137 [2024-07-24 18:03:55.337906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.337913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814840) on tqpair=0x7b4540
00:21:09.137 [2024-07-24 18:03:55.337924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.337931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.337938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b4540)
00:21:09.137 [2024-07-24 18:03:55.337948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.137 [2024-07-24 18:03:55.337974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814840, cid 3, qid 0
00:21:09.137 [2024-07-24 18:03:55.342124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:09.137 [2024-07-24 18:03:55.342141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:09.137 [2024-07-24 18:03:55.342152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.342159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814840) on tqpair=0x7b4540
00:21:09.137 [2024-07-24 18:03:55.342166] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:21:09.137 [2024-07-24 18:03:55.342174] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:21:09.137 [2024-07-24 18:03:55.342206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.342216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.342222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b4540)
00:21:09.137 [2024-07-24 18:03:55.342233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:09.137 [2024-07-24 18:03:55.342256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x814840, cid 3, qid 0
00:21:09.137 [2024-07-24 18:03:55.342413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:09.137 [2024-07-24 18:03:55.342425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:09.137 [2024-07-24 18:03:55.342432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:09.137 [2024-07-24 18:03:55.342439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x814840) on tqpair=0x7b4540
00:21:09.137 [2024-07-24 18:03:55.342451] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds
00:21:09.137 0%
00:21:09.137 Data Units Read: 0
00:21:09.137 Data Units Written: 0
00:21:09.137 Host Read Commands: 0
00:21:09.137 Host Write Commands: 0
00:21:09.137 Controller Busy Time: 0 minutes
00:21:09.137 Power Cycles: 0
00:21:09.137 Power On Hours: 0 hours
00:21:09.137 Unsafe Shutdowns: 0
00:21:09.137 Unrecoverable Media Errors: 0
00:21:09.137 Lifetime Error Log Entries: 0
00:21:09.137 Warning Temperature Time: 0 minutes
00:21:09.137 Critical Temperature Time: 0 minutes
00:21:09.137
00:21:09.137 Number of Queues
00:21:09.137 ================
00:21:09.137 Number of I/O Submission Queues: 127
00:21:09.137 Number of I/O Completion Queues: 127
00:21:09.137
00:21:09.137 Active Namespaces
00:21:09.137 =================
00:21:09.137 Namespace ID:1
00:21:09.137 Error Recovery Timeout: Unlimited
00:21:09.137 Command Set Identifier: NVM (00h)
00:21:09.137 Deallocate: Supported
00:21:09.137 Deallocated/Unwritten Error: Not Supported
00:21:09.137 Deallocated Read Value: Unknown
00:21:09.137 Deallocate in Write Zeroes: Not Supported
00:21:09.137 Deallocated Guard Field: 0xFFFF
00:21:09.137 Flush: Supported
00:21:09.137 Reservation: Supported
00:21:09.137 Namespace Sharing Capabilities: Multiple Controllers
00:21:09.137 Size (in LBAs): 131072 (0GiB)
00:21:09.137 Capacity (in LBAs): 131072 (0GiB)
00:21:09.137 Utilization (in LBAs): 131072 (0GiB)
00:21:09.137 NGUID: ABCDEF0123456789ABCDEF0123456789
00:21:09.137 EUI64: ABCDEF0123456789
00:21:09.137 UUID: b60cae5e-ab9f-4334-b0e3-2ba9c47365f3
00:21:09.137 Thin Provisioning: Not Supported
00:21:09.137 Per-NS Atomic Units: Yes
00:21:09.137 Atomic Boundary Size (Normal): 0
00:21:09.137 Atomic Boundary Size (PFail): 0
00:21:09.137 Atomic Boundary Offset: 0
00:21:09.137 Maximum Single Source Range Length: 65535
00:21:09.137 Maximum Copy Length: 65535
00:21:09.137 Maximum Source Range Count: 1
00:21:09.137 NGUID/EUI64 Never Reused: No
00:21:09.137 Namespace Write Protected: No
00:21:09.137 Number of LBA Formats: 1
00:21:09.137 Current LBA Format: LBA Format #00
00:21:09.137 LBA Format #00: Data Size: 512 Metadata Size: 0
00:21:09.137
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:09.137 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:09.137 rmmod nvme_tcp
rmmod nvme_fabrics
00:21:09.396 rmmod nvme_keyring
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2840614 ']'
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2840614
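The controller and namespace dump above was produced by the nvmf_identify test just before it deletes the subsystem. For reference, an initiator with stock nvme-cli can pull the same identify structures while the subsystem is still exported; a minimal sketch, assuming the 10.0.0.2:4420 TCP listener used throughout this job (the /dev/nvme0 device name is illustrative and depends on what is already attached):

    nvme discover -t tcp -a 10.0.0.2 -s 4420                       # list subsystems the target exports
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0                                        # controller data: log pages, SGL support, power states
    nvme id-ns /dev/nvme0n1                                        # namespace data: LBA formats, NGUID/EUI64/UUID
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1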
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2840614 ']'
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2840614
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2840614
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2840614'
killing process with pid 2840614
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2840614
00:21:09.396 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2840614
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:09.653 18:03:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:11.557 18:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:11.557
00:21:11.557 real 0m6.145s
00:21:11.557 user 0m7.566s
00:21:11.557 sys 0m1.899s
00:21:11.557 18:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:11.557 18:03:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:11.557 ************************************
00:21:11.557 END TEST nvmf_identify
00:21:11.557 ************************************
00:21:11.557 18:03:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:21:11.557 18:03:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:11.557 18:03:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:11.557 18:03:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:11.815 ************************************
00:21:11.815 START TEST nvmf_perf
00:21:11.815 ************************************
00:21:11.815 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:21:11.815 * Looking for test storage...
00:21:11.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
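When nvmf/common.sh is sourced above it establishes the host identity: NVME_HOSTNQN is taken from nvme gen-hostnqn, and NVME_HOSTID carries the matching UUID. The values in the trace are consistent with the host ID simply being the UUID portion of the generated NQN; one way to reproduce the pair by hand (an assumption about the derivation, not a quote of common.sh, and the generated UUID differs per invocation):

    HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    HOSTID=${HOSTNQN##*:}            # strip through the last ':', leaving the bare UUID
    # illustrative use with the subsystem NQN defined in common.sh:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"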
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable
00:21:11.816 18:03:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=()
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=()
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=()
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=()
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=()
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=()
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
Found 0000:09:00.0 (0x8086 - 0x159b)
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
Found 0000:09:00.1 (0x8086 - 0x159b)
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:13.720 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:21:13.721 Found net devices under 0000:09:00.0: cvl_0_0
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:21:13.721 Found net devices under 0000:09:00.1: cvl_0_1
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:13.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:13.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms
00:21:13.721
00:21:13.721 --- 10.0.0.2 ping statistics ---
00:21:13.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:13.721 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:13.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:13.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:21:13.721
00:21:13.721 --- 10.0.0.1 ping statistics ---
00:21:13.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:13.721 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
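Both pings answer, so the plumbing works: one physical port (cvl_0_0) now lives inside the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace above into a standalone sketch (interface and namespace names as used in this job):

    ip netns add cvl_0_0_ns_spdk                           # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator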
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2842701
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2842701
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2842701 ']'
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:13.721 18:03:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:13.721 [2024-07-24 18:03:59.972383] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:21:13.721 [2024-07-24 18:03:59.972447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:13.982 EAL: No free 2048 kB hugepages reported on node 1
00:21:13.982 [2024-07-24 18:04:00.042009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:14.239 [2024-07-24 18:04:00.162502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:14.239 [2024-07-24 18:04:00.162563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:14.239 [2024-07-24 18:04:00.162580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:14.239 [2024-07-24 18:04:00.162593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:14.239 [2024-07-24 18:04:00.162606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:14.239 [2024-07-24 18:04:00.162701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:14.239 [2024-07-24 18:04:00.162768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:14.239 [2024-07-24 18:04:00.162862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:21:14.239 [2024-07-24 18:04:00.162865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:21:14.239 18:04:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:21:17.514 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:21:17.514 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:21:17.514 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0
00:21:17.514 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:21:17.771 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:21:17.771 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']'
00:21:17.771 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:21:17.771 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:21:17.771 18:04:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:21:18.029 [2024-07-24 18:04:04.206797] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:18.029 18:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:18.286 18:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:21:18.286 18:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:18.544 18:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:21:18.544 18:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:21:18.801 18:04:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:19.059 [2024-07-24 18:04:05.186443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:19.059 18:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
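Stripped of the xtrace noise, the configuration just issued is a short RPC sequence that builds the whole target. A condensed sketch using scripts/rpc.py, which talks to the nvmf_tgt RPC socket at /var/tmp/spdk.sock by default (commands and options exactly as they appear in the trace; the comment glosses are mine):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                        # 64 MB RAM-backed bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_transport -t tcp -o                  # TCP transport, flags as in NVMF_TRANSPORT_OPTS
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Note the RPC socket is a Unix socket, so these calls work from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.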
00:21:19.316 18:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']'
00:21:19.316 18:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0'
00:21:19.316 18:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:21:19.316 18:04:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0'
00:21:20.688 Initializing NVMe Controllers
00:21:20.688 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54]
00:21:20.688 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0
00:21:20.688 Initialization complete. Launching workers.
00:21:20.688 ========================================================
00:21:20.688 Latency(us)
00:21:20.688 Device Information : IOPS MiB/s Average min max
00:21:20.688 PCIE (0000:0b:00.0) NSID 1 from core 0: 85764.28 335.02 372.51 10.80 7967.50
00:21:20.688 ========================================================
00:21:20.688 Total : 85764.28 335.02 372.51 10.80 7967.50
00:21:20.688
00:21:20.688 18:04:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:20.688 EAL: No free 2048 kB hugepages reported on node 1
00:21:22.060 Initializing NVMe Controllers
00:21:22.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:22.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:22.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:22.061 Initialization complete. Launching workers.
00:21:22.061 ========================================================
00:21:22.061 Latency(us)
00:21:22.061 Device Information : IOPS MiB/s Average min max
00:21:22.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.96 0.31 12545.15 181.88 45994.03
00:21:22.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.96 0.26 15279.90 4970.81 51875.00
00:21:22.061 ========================================================
00:21:22.061 Total : 145.92 0.57 13781.40 181.88 51875.00
00:21:22.061
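The runs above share one binary and differ mainly in the -r transport string, which is how spdk_nvme_perf selects a local PCIe controller versus an NVMe/TCP target. The two invocations so far, side by side with flags verbatim from the trace (-q queue depth, -o IO size in bytes, -w workload, -M read percentage of the mix, -t seconds):

    # local PCIe baseline against the node's SSD
    build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:0b:00.0'
    # same 4K randrw workload over the fabric, queue depth 1
    build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The gap between the two latency tables (roughly 372 us average at qd32 on PCIe versus 12-15 ms at qd1 over TCP) reflects the very different device paths and queue depths, not a like-for-like comparison.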
00:21:22.061 18:04:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:22.062 EAL: No free 2048 kB hugepages reported on node 1
00:21:23.438 Initializing NVMe Controllers
00:21:23.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:23.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:23.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:23.438 Initialization complete. Launching workers.
00:21:23.439 ========================================================
00:21:23.439 Latency(us)
00:21:23.439 Device Information : IOPS MiB/s Average min max
00:21:23.439 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8492.99 33.18 3783.75 541.89 7621.76
00:21:23.439 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3874.00 15.13 8296.53 6452.88 15961.13
00:21:23.439 ========================================================
00:21:23.439 Total : 12366.99 48.31 5197.39 541.89 15961.13
00:21:23.439
00:21:23.439 18:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:21:23.439 18:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:21:23.439 18:04:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:23.746 EAL: No free 2048 kB hugepages reported on node 1
00:21:26.275 Initializing NVMe Controllers
00:21:26.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:26.275 Controller IO queue size 128, less than required.
00:21:26.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:26.275 Controller IO queue size 128, less than required.
00:21:26.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:26.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:26.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:26.275 Initialization complete. Launching workers.
00:21:26.275 ========================================================
00:21:26.275 Latency(us)
00:21:26.275 Device Information : IOPS MiB/s Average min max
00:21:26.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1183.20 295.80 110359.44 72282.29 165169.75
00:21:26.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.61 143.65 229974.28 80351.74 329122.35
00:21:26.275 ========================================================
00:21:26.275 Total : 1757.81 439.45 149460.37 72282.29 329122.35
00:21:26.275
00:21:26.275 18:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:21:26.275 EAL: No free 2048 kB hugepages reported on node 1
00:21:26.275 No valid NVMe controllers or AIO or URING devices found
00:21:26.275 Initializing NVMe Controllers
00:21:26.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:26.275 Controller IO queue size 128, less than required.
00:21:26.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:26.275 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:21:26.275 Controller IO queue size 128, less than required.
00:21:26.276 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:26.276 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:21:26.276 WARNING: Some requested NVMe devices were skipped
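The -o 36964 run is skipped by design rather than by failure: every IO must be a whole number of blocks, and 36964 bytes is not divisible by the 512-byte sector size both namespaces report (72 * 512 = 36864, so 36964 % 512 = 100). A one-line sanity check before picking an IO size:

    io=36964; bs=512; (( io % bs == 0 )) && echo ok || echo "skip: remainder $((io % bs)) bytes"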
00:21:26.533 18:04:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:21:26.533 EAL: No free 2048 kB hugepages reported on node 1
00:21:29.062 Initializing NVMe Controllers
00:21:29.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:29.062 Controller IO queue size 128, less than required.
00:21:29.062 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:29.062 Controller IO queue size 128, less than required.
00:21:29.062 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:29.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:29.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:29.062 Initialization complete. Launching workers.
00:21:29.062
00:21:29.062 ====================
00:21:29.062 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:29.062 TCP transport:
00:21:29.062 polls: 21242
00:21:29.062 idle_polls: 7456
00:21:29.062 sock_completions: 13786
00:21:29.062 nvme_completions: 3991
00:21:29.062 submitted_requests: 6014
00:21:29.062 queued_requests: 1
00:21:29.062
00:21:29.062 ====================
00:21:29.062 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:29.062 TCP transport:
00:21:29.062 polls: 24460
00:21:29.062 idle_polls: 9930
00:21:29.062 sock_completions: 14530
00:21:29.062 nvme_completions: 4925
00:21:29.062 submitted_requests: 7426
00:21:29.062 queued_requests: 1
00:21:29.062 ========================================================
00:21:29.062 Latency(us)
00:21:29.062 Device Information : IOPS MiB/s Average min max
00:21:29.062 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 997.49 249.37 131915.91 69878.05 207124.75
00:21:29.062 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1230.99 307.75 105158.69 60618.77 153679.70
00:21:29.062 ========================================================
00:21:29.062 Total : 2228.49 557.12 117135.50 60618.77 207124.75
00:21:29.062
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:29.062 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:29.062 rmmod nvme_tcp
rmmod nvme_fabrics
00:21:29.321 rmmod nvme_keyring
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2842701 ']'
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2842701
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2842701 ']'
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2842701
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2842701
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2842701'
killing process with pid 2842701
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2842701
00:21:29.321 18:04:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2842701
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:31.221 18:04:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:33.123 18:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:33.123
00:21:33.123 real 0m21.202s
00:21:33.123 user 1m5.501s
00:21:33.123 sys 0m5.210s
00:21:33.123 18:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:33.123 18:04:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:33.123 ************************************
00:21:33.123 END TEST nvmf_perf
00:21:33.123 ************************************
00:21:33.124 18:04:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:33.124 18:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:33.124 18:04:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:33.124 18:04:19 nvmf_tcp.nvmf_host --
common/autotest_common.sh@10 -- # set +x 00:21:33.124 ************************************ 00:21:33.124 START TEST nvmf_fio_host 00:21:33.124 ************************************ 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:33.124 * Looking for test storage... 00:21:33.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.124 18:04:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:35.025 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:35.025 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.025 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.026 18:04:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:35.026 Found net devices under 0000:09:00.0: cvl_0_0 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:35.026 Found net devices under 0000:09:00.1: cvl_0_1 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:21:35.026 00:21:35.026 --- 10.0.0.2 ping statistics --- 00:21:35.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.026 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:21:35.026 00:21:35.026 --- 10.0.0.1 ping statistics --- 00:21:35.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.026 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2846659 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2846659 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2846659 ']' 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.026 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.285 [2024-07-24 18:04:21.316670] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
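Note: the network plumbing that nvmf_tcp_init just performed is worth seeing in one place. The harness takes the two E810 ports it discovered (cvl_0_0 and cvl_0_1), moves the target-side port into a private network namespace, and addresses both ends so NVMe/TCP traffic flows over the physical link between them. A minimal sketch, reusing the exact interface names, namespace, addresses, and firewall rule from this log (any other rig would substitute its own port names):

  ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability

The ping round trips reported above (0.248 ms and 0.130 ms) confirm both directions are reachable before the target starts.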
00:21:35.285 [2024-07-24 18:04:21.316764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.285 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.285 [2024-07-24 18:04:21.383907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.285 [2024-07-24 18:04:21.497235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.285 [2024-07-24 18:04:21.497288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.285 [2024-07-24 18:04:21.497302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.285 [2024-07-24 18:04:21.497313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.285 [2024-07-24 18:04:21.497330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.285 [2024-07-24 18:04:21.497408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.285 [2024-07-24 18:04:21.497492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.285 [2024-07-24 18:04:21.497558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.285 [2024-07-24 18:04:21.497561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.543 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.543 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:35.543 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:35.800 [2024-07-24 18:04:21.908558] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.800 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:35.800 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.800 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.800 18:04:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:36.058 Malloc1 00:21:36.058 18:04:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.316 18:04:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:36.573 18:04:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.830 [2024-07-24 18:04:22.922664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.830 18:04:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:37.088 
18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:37.088 18:04:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:37.345 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:37.345 fio-3.35 00:21:37.345 Starting 
1 thread 00:21:37.345 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.868 00:21:39.868 test: (groupid=0, jobs=1): err= 0: pid=2847026: Wed Jul 24 18:04:25 2024 00:21:39.868 read: IOPS=8999, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec) 00:21:39.868 slat (usec): min=2, max=107, avg= 2.69, stdev= 1.55 00:21:39.868 clat (usec): min=2254, max=13666, avg=7821.40, stdev=580.51 00:21:39.868 lat (usec): min=2279, max=13669, avg=7824.08, stdev=580.43 00:21:39.868 clat percentiles (usec): 00:21:39.868 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:21:39.868 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:21:39.868 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:21:39.868 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11076], 99.95th=[12780], 00:21:39.868 | 99.99th=[13698] 00:21:39.868 bw ( KiB/s): min=34744, max=36704, per=99.97%, avg=35984.00, stdev=863.14, samples=4 00:21:39.868 iops : min= 8686, max= 9176, avg=8996.00, stdev=215.78, samples=4 00:21:39.868 write: IOPS=9017, BW=35.2MiB/s (36.9MB/s)(70.7MiB/2007msec); 0 zone resets 00:21:39.868 slat (nsec): min=2287, max=86845, avg=2831.56, stdev=1185.41 00:21:39.868 clat (usec): min=1045, max=12941, avg=6306.74, stdev=520.33 00:21:39.868 lat (usec): min=1052, max=12944, avg=6309.58, stdev=520.31 00:21:39.868 clat percentiles (usec): 00:21:39.868 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:21:39.868 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:21:39.868 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 00:21:39.868 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[11076], 99.95th=[11863], 00:21:39.868 | 99.99th=[12911] 00:21:39.868 bw ( KiB/s): min=35656, max=36416, per=100.00%, avg=36082.00, stdev=364.06, samples=4 00:21:39.868 iops : min= 8914, max= 9104, avg=9020.50, stdev=91.01, samples=4 00:21:39.868 lat (msec) : 2=0.02%, 4=0.11%, 10=99.72%, 20=0.15% 00:21:39.868 cpu : usr=58.03%, sys=36.74%, ctx=74, majf=0, minf=38 00:21:39.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:39.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:39.868 issued rwts: total=18061,18099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:39.868 00:21:39.868 Run status group 0 (all jobs): 00:21:39.868 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.0MB), run=2007-2007msec 00:21:39.868 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.7MiB (74.1MB), run=2007-2007msec 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:21:39.868 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:39.869 18:04:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:40.132 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:40.132 fio-3.35 00:21:40.132 Starting 1 thread 00:21:40.132 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.506 [2024-07-24 18:04:27.417718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x609c10 is same with the state(6) to be set 00:21:41.506 [2024-07-24 18:04:27.417781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x609c10 is same with the state(6) to be set 00:21:42.439 00:21:42.439 test: (groupid=0, jobs=1): err= 0: pid=2847394: Wed Jul 24 18:04:28 2024 00:21:42.439 read: IOPS=7097, BW=111MiB/s (116MB/s)(223MiB/2007msec) 00:21:42.439 slat (usec): min=2, max=108, avg= 3.88, stdev= 2.05 00:21:42.439 clat (usec): min=2725, max=22448, avg=10387.97, stdev=2599.60 00:21:42.439 lat (usec): min=2728, max=22452, avg=10391.85, stdev=2599.72 00:21:42.439 clat percentiles (usec): 00:21:42.439 | 1.00th=[ 5014], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 8225], 00:21:42.439 | 30.00th=[ 9110], 
40.00th=[ 9634], 50.00th=[10159], 60.00th=[10683], 00:21:42.439 | 70.00th=[11469], 80.00th=[12387], 90.00th=[13698], 95.00th=[14877], 00:21:42.439 | 99.00th=[18482], 99.50th=[19268], 99.90th=[19792], 99.95th=[20317], 00:21:42.439 | 99.99th=[21103] 00:21:42.439 bw ( KiB/s): min=45984, max=74176, per=49.54%, avg=56256.00, stdev=12394.95, samples=4 00:21:42.439 iops : min= 2874, max= 4636, avg=3516.00, stdev=774.68, samples=4 00:21:42.439 write: IOPS=4089, BW=63.9MiB/s (67.0MB/s)(116MiB/1810msec); 0 zone resets 00:21:42.439 slat (usec): min=30, max=284, avg=34.58, stdev= 8.42 00:21:42.439 clat (usec): min=6620, max=26103, avg=13935.10, stdev=3813.66 00:21:42.439 lat (usec): min=6652, max=26136, avg=13969.68, stdev=3814.87 00:21:42.439 clat percentiles (usec): 00:21:42.439 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10290], 00:21:42.440 | 30.00th=[10945], 40.00th=[11994], 50.00th=[13566], 60.00th=[15008], 00:21:42.440 | 70.00th=[16712], 80.00th=[17695], 90.00th=[19268], 95.00th=[20317], 00:21:42.440 | 99.00th=[22152], 99.50th=[22676], 99.90th=[25560], 99.95th=[25822], 00:21:42.440 | 99.99th=[26084] 00:21:42.440 bw ( KiB/s): min=48704, max=76864, per=89.72%, avg=58704.00, stdev=12594.44, samples=4 00:21:42.440 iops : min= 3044, max= 4804, avg=3669.00, stdev=787.15, samples=4 00:21:42.440 lat (msec) : 4=0.23%, 10=36.85%, 20=60.79%, 50=2.13% 00:21:42.440 cpu : usr=71.25%, sys=24.71%, ctx=57, majf=0, minf=62 00:21:42.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:42.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:42.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:42.440 issued rwts: total=14245,7402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:42.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:42.440 00:21:42.440 Run status group 0 (all jobs): 00:21:42.440 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=223MiB (233MB), run=2007-2007msec 00:21:42.440 WRITE: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=116MiB (121MB), run=1810-1810msec 00:21:42.440 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.697 rmmod nvme_tcp 00:21:42.697 rmmod nvme_fabrics 00:21:42.697 rmmod nvme_keyring 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.697 18:04:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2846659 ']' 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2846659 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2846659 ']' 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2846659 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2846659 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2846659' 00:21:42.697 killing process with pid 2846659 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2846659 00:21:42.697 18:04:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2846659 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.955 18:04:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.493 18:04:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.493 00:21:45.493 real 0m12.132s 00:21:45.493 user 0m35.054s 00:21:45.493 sys 0m4.451s 00:21:45.493 18:04:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.493 18:04:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.493 ************************************ 00:21:45.493 END TEST nvmf_fio_host 00:21:45.493 ************************************ 00:21:45.493 18:04:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:45.493 18:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:45.493 18:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.493 18:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.493 ************************************ 00:21:45.493 START TEST nvmf_failover 00:21:45.494 
************************************ 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:45.494 * Looking for test storage... 00:21:45.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.494 18:04:31 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.494 18:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.904 
18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:46.904 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:46.904 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:46.904 Found net devices under 0000:09:00.0: cvl_0_0 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:46.904 Found net devices under 0000:09:00.1: cvl_0_1 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.904 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
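Note: the "Found net devices under ..." messages above come from a plain sysfs walk rather than any driver-specific tooling: for each PCI address matching the supported-NIC whitelist, the script globs the device's net/ directory to recover the kernel interface name. A standalone sketch of the same lookup (the two PCI addresses are the E810 ports reported in this log; the operstate read is an illustrative stand-in for the script's link-up check, not its exact code):

  for pci in 0000:09:00.0 0000:09:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue          # skip functions with no bound netdev
          dev=${netdir##*/}
          state=$(cat "$netdir/operstate" 2>/dev/null)
          echo "Found net devices under $pci: $dev ($state)"
      done
  done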
00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.905 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.162 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:21:47.162 00:21:47.162 --- 10.0.0.2 ping statistics --- 00:21:47.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.163 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:47.163 00:21:47.163 --- 10.0.0.1 ping statistics --- 00:21:47.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.163 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2849665 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2849665 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2849665 ']' 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.163 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.163 [2024-07-24 18:04:33.383449] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:21:47.163 [2024-07-24 18:04:33.383526] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.163 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.421 [2024-07-24 18:04:33.450608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:47.421 [2024-07-24 18:04:33.563262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.421 [2024-07-24 18:04:33.563316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.421 [2024-07-24 18:04:33.563329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.421 [2024-07-24 18:04:33.563340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.421 [2024-07-24 18:04:33.563350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.421 [2024-07-24 18:04:33.563444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.421 [2024-07-24 18:04:33.563508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.421 [2024-07-24 18:04:33.563511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.421 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.421 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:47.421 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.421 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.421 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:47.679 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.679 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:47.679 [2024-07-24 18:04:33.937517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.937 18:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:48.195 Malloc0 00:21:48.195 18:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.452 18:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.710 18:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.967 [2024-07-24 18:04:34.988529] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.967 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:48.967 [2024-07-24 18:04:35.229181] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.225 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:49.483 [2024-07-24 18:04:35.522131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2849952 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2849952 /var/tmp/bdevperf.sock 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2849952 ']' 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
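For reference, the target-side state built by host/failover.sh@22-28 above can be reproduced standalone; a minimal sketch using the same rpc.py, NQN, and listener address as this run:

    # Target setup: TCP transport, a malloc-backed namespace, and three listeners.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                      # three listeners to fail over between
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done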
00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.483 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:49.741 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.741 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:49.741 18:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:49.999 NVMe0n1 00:21:50.564 18:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.564 00:21:50.564 18:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2850089 00:21:50.564 18:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.564 18:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:51.498 18:04:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.756 [2024-07-24 18:04:37.956556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2471f40 is same with the state(6) to be set 00:21:51.756 [the same tcp.c:1653 *ERROR* line for tqpair=0x2471f40 repeated 32 more times, 18:04:37.956638 through 18:04:37.957085] 18:04:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:55.038 18:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:55.296 00:21:55.296 18:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:55.554 18:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:58.834 18:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.834 [2024-07-24 18:04:44.845849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.834 18:04:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:59.768 18:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:00.026 18:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2850089 00:22:06.589 0 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2849952 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2849952 ']' 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2849952 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2849952 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2849952' 00:22:06.589 killing process with pid 2849952 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2849952 00:22:06.589 18:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2849952 00:22:06.589 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:06.589 [2024-07-24 18:04:35.586753] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:22:06.590 [2024-07-24 18:04:35.586845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849952 ] 00:22:06.590 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.590 [2024-07-24 18:04:35.645315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.590 [2024-07-24 18:04:35.755672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.590 Running I/O for 15 seconds... 
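The host-side abort trace that follows was provoked by the listener shuffle in host/failover.sh@43-57 earlier in the run; condensed into a standalone sketch, assuming the same rpc.py, NQN, and $rpc shorthand as the previous sketch:

    # Walk the listeners while bdevperf drives I/O, forcing failovers and a failback.
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # in-flight I/O on 4420 is aborted (the SQ DELETION storm below); host fails over
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422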
00:22:06.590 [2024-07-24 18:04:37.957504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.957544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.957591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.957983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.957997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.958026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958200] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.590 [2024-07-24 18:04:37.958563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.590 [2024-07-24 18:04:37.958578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.590 [2024-07-24 18:04:37.958592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.958980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.958993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 
18:04:37.959137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.591 [2024-07-24 18:04:37.959747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.591 [2024-07-24 18:04:37.959762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.959984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.959999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.592 [2024-07-24 18:04:37.960365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.592 [2024-07-24 18:04:37.960378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.592 [2024-07-24 18:04:37.960394 - 18:04:37.961482] nvme_qpair.c: 243/474: *NOTICE*: 36 READ commands (sqid:1, nsid:1, lba:75840-76120, len:8, SGL TRANSPORT DATA BLOCK) each printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command print/completion pairs elided)
00:22:06.593 [2024-07-24 18:04:37.961496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaedc10 is same with the state(6) to be set
00:22:06.593 [2024-07-24 18:04:37.961513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:06.593 [2024-07-24 18:04:37.961524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:76128 len:8 PRP1 0x0 PRP2 0x0, ABORTED - SQ DELETION (00/08)
00:22:06.593 [2024-07-24 18:04:37.961604] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaedc10 was disconnected and freed. reset controller.
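The repeated "ABORTED - SQ DELETION (00/08)" completions above are the generic NVMe status (status code type 0x0, status code 0x08) that the initiator assigns to every command still outstanding on a submission queue when that queue is torn down during a reset. Below is a minimal sketch of how an SPDK completion callback can recognize this status and treat the I/O as retryable, using only public spdk/nvme.h definitions; it is an illustration, not part of the test code:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* I/O completion callback (spdk_nvme_cmd_cb signature). Commands that
     * were in flight when the qpair's submission queue was deleted complete
     * with SCT 0x0 (GENERIC) / SC 0x08 (ABORTED - SQ DELETION), exactly as
     * printed in the log above. */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        (void)ctx;
        bool retryable = spdk_nvme_cpl_is_error(cpl) &&
                         cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                         cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
        if (retryable) {
            /* The command never executed on the target side; it is safe to
             * resubmit once the controller reset/failover completes. */
        }
    }

bdev_nvme applies similar logic internally when deciding whether an aborted I/O can be replayed after the reset instead of surfacing an error to the application.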
00:22:06.593 [2024-07-24 18:04:37.961621] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:06.593 [2024-07-24 18:04:37.961668 - 18:04:37.961769] nvme_qpair.c: 223/474: *NOTICE*: 4 ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0-3) each completed as ABORTED - SQ DELETION (00/08) (four print/completion pairs elided)
00:22:06.593 [2024-07-24 18:04:37.961782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:06.593 [2024-07-24 18:04:37.965138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:06.593 [2024-07-24 18:04:37.965178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad00f0 (9): Bad file descriptor
00:22:06.593 [2024-07-24 18:04:38.006434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:06.593 [2024-07-24 18:04:41.577795 - 18:04:41.581730] nvme_qpair.c: 243/474: *NOTICE*: 69 WRITE commands (sqid:1, nsid:1, lba:73760-74304, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and 58 READ commands (sqid:1, nsid:1, lba:73288-73744, len:8, SGL TRANSPORT DATA BLOCK) each printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (interleaved per-command print/completion pairs elided)
00:22:06.597 [2024-07-24 18:04:41.581744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaefa70 is same with the state(6) to be set
00:22:06.597 [2024-07-24 18:04:41.581764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:06.597 [2024-07-24 18:04:41.581776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:73752 len:8 PRP1 0x0 PRP2 0x0, ABORTED - SQ DELETION (00/08)
00:22:06.597 [2024-07-24 18:04:41.581857] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaefa70 was disconnected and freed. reset controller.
00:22:06.597 [2024-07-24 18:04:41.581875] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:06.597 [2024-07-24 18:04:41.581921 - 18:04:41.582025] nvme_qpair.c: 223/474: *NOTICE*: 4 ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:3-0) each completed as ABORTED - SQ DELETION (00/08) (four print/completion pairs elided)
00:22:06.597 [2024-07-24 18:04:41.582038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:06.597 [2024-07-24 18:04:41.585355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:06.597 [2024-07-24 18:04:41.585411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad00f0 (9): Bad file descriptor
00:22:06.597 [2024-07-24 18:04:41.708305] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
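Each failover notice above shows bdev_nvme moving the controller's active transport ID to the next registered path (10.0.0.2:4420 -> 4421 -> 4422) and then resetting it onto that path. Below is a minimal sketch of the same path-cycling idea expressed with the public spdk_nvme_connect() API, assuming the three listeners and subsystem NQN taken from the log and an already-initialized SPDK environment; this is illustrative only, not how the test itself is wired:

    #include <stdio.h>
    #include <stddef.h>
    #include "spdk/nvme.h"

    /* Try each TCP listener in order and return the first controller that
     * connects; NULL if every path is down. */
    static struct spdk_nvme_ctrlr *
    connect_any_path(void)
    {
        static const char *ports[] = { "4420", "4421", "4422" };
        struct spdk_nvme_transport_id trid = {0};

        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        for (size_t i = 0; i < sizeof(ports) / sizeof(ports[0]); i++) {
            snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", ports[i]);
            struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr != NULL) {
                return ctrlr; /* first reachable path wins */
            }
        }
        return NULL;
    }

bdev_nvme keeps all of these trids registered on one controller and fails over in place (bdev_nvme_failover_trid) rather than reconnecting from scratch, which is why the log shows a reset of the existing controller instead of a new attach.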
00:22:06.597 [2024-07-24 18:04:46.144826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:06.597 [2024-07-24 18:04:46.144892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every other in-flight command on qid:1 -- WRITEs for lba 27936-28184 and READs for lba 27168-27912, cids various, all completed ABORTED - SQ DELETION (00/08) while the queue pair is torn down ...]
00:22:06.601 [2024-07-24 18:04:46.148846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaefa70 is same with the state(6) to be set
00:22:06.601 [2024-07-24 18:04:46.148862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:06.601 [2024-07-24 18:04:46.148873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:06.601 [2024-07-24 18:04:46.148884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27920 len:8 PRP1 0x0 PRP2 0x0
00:22:06.601 [2024-07-24 18:04:46.148897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.601 [2024-07-24 18:04:46.148961] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaefa70 was disconnected and freed. reset controller.
00:22:06.601 [2024-07-24 18:04:46.148979] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:06.601 [2024-07-24 18:04:46.149027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.601 [2024-07-24 18:04:46.149050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.601 [2024-07-24 18:04:46.149067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.601 [2024-07-24 18:04:46.149081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.601 [2024-07-24 18:04:46.149095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.601 [2024-07-24 18:04:46.149116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.601 [2024-07-24 18:04:46.149138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.601 [2024-07-24 18:04:46.149152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.601 [2024-07-24 18:04:46.149165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.601 [2024-07-24 18:04:46.149205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad00f0 (9): Bad file descriptor 00:22:06.601 [2024-07-24 18:04:46.152518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.601 [2024-07-24 18:04:46.321424] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
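That is this run's third and final reset: the initiator has failed over through 10.0.0.2:4421 and 10.0.0.2:4422 and back to 4420, logging one "Resetting controller successful" per hop. The pass check that follows simply counts those notices in the captured output; in outline (the grep's redirect target is wrapped out of the trace, so the file name below is an assumption based on the try.txt the test handles later):

    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")  # $testdir: the test's host/ directory, assumed
    (( count != 3 )) && exit 1  # expect exactly one successful reset per provoked failover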
00:22:06.601
00:22:06.601 Latency(us)
00:22:06.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.601 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:06.601 Verification LBA range: start 0x0 length 0x4000
00:22:06.601 NVMe0n1 : 15.01 8383.19 32.75 861.47 0.00 13816.31 843.47 15243.19
00:22:06.601 ===================================================================================================================
00:22:06.601 Total : 8383.19 32.75 861.47 0.00 13816.31 843.47 15243.19
00:22:06.601 Received shutdown signal, test time was about 15.000000 seconds
00:22:06.601
00:22:06.601 Latency(us)
00:22:06.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.601 ===================================================================================================================
00:22:06.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:06.601 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:06.601 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:06.601 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2851821
00:22:06.601 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:06.601 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2851821 /var/tmp/bdevperf.sock
00:22:06.601 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2851821 ']'
00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
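The second bdevperf instance just launched uses -z, so it starts idle and waits on /var/tmp/bdevperf.sock to be configured over RPC before perform_tests kicks off the workload. The configuration that follows in the trace boils down to: add two more TCP listeners on the target, attach the same subsystem through each of the three ports, then detach the active path to force a failover. Condensed from the commands below (full rpc.py paths shortened):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0  # the bdev must survive on a remaining path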
00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:06.602 [2024-07-24 18:04:52.703326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:06.602 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:06.859 [2024-07-24 18:04:52.952020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:06.859 18:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.424 NVMe0n1 00:22:07.424 18:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.681 00:22:07.681 18:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.939 00:22:07.939 18:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:07.939 18:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:08.197 18:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.460 18:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:11.775 18:04:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:11.775 18:04:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:11.775 18:04:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2852484 00:22:11.775 18:04:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.775 18:04:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2852484 00:22:12.707 0 00:22:12.707 18:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:12.707 [2024-07-24 18:04:52.204213] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:22:12.707 [2024-07-24 18:04:52.204318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851821 ] 00:22:12.707 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.707 [2024-07-24 18:04:52.264605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.707 [2024-07-24 18:04:52.377160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.707 [2024-07-24 18:04:54.541658] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:12.707 [2024-07-24 18:04:54.541754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.707 [2024-07-24 18:04:54.541777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.707 [2024-07-24 18:04:54.541793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.707 [2024-07-24 18:04:54.541807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.707 [2024-07-24 18:04:54.541821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.707 [2024-07-24 18:04:54.541834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.707 [2024-07-24 18:04:54.541848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.707 [2024-07-24 18:04:54.541862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.707 [2024-07-24 18:04:54.541876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.707 [2024-07-24 18:04:54.541921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.707 [2024-07-24 18:04:54.541953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7a0f0 (9): Bad file descriptor 00:22:12.707 [2024-07-24 18:04:54.595656] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:12.707 Running I/O for 1 seconds... 
00:22:12.707
00:22:12.707 Latency(us)
00:22:12.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:12.707 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:12.707 Verification LBA range: start 0x0 length 0x4000
00:22:12.707 NVMe0n1 : 1.01 8659.55 33.83 0.00 0.00 14719.03 3034.07 15728.64
00:22:12.707 ===================================================================================================================
00:22:12.707 Total : 8659.55 33.83 0.00 0.00 14719.03 3034.07 15728.64
00:22:12.707 18:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:04:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:22:12.965 18:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:13.222 18:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:22:13.479 18:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:13.737 18:04:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:22:17.012 18:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
18:05:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:22:17.012 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2851821
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2851821 ']'
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2851821
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2851821
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2851821'
killing process with pid 2851821
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2851821
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2851821
00:22:17.269 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:17.526 rmmod nvme_tcp 00:22:17.526 rmmod nvme_fabrics 00:22:17.526 rmmod nvme_keyring 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2849665 ']' 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2849665 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2849665 ']' 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2849665 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2849665 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2849665' 00:22:17.526 killing process with pid 2849665 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2849665 00:22:17.526 18:05:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2849665 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.092 18:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:19.994 00:22:19.994 real 0m34.905s 00:22:19.994 user 2m2.574s 00:22:19.994 sys 0m6.054s 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:19.994 ************************************ 00:22:19.994 END TEST nvmf_failover 00:22:19.994 ************************************ 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.994 ************************************ 00:22:19.994 START TEST nvmf_host_discovery 00:22:19.994 ************************************ 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:19.994 * Looking for test storage... 00:22:19.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:19.994 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:19.995 18:05:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.995 18:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.926 18:05:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:21.926 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:21.926 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:21.926 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:21.927 Found net devices under 0000:09:00.0: cvl_0_0 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:21.927 Found net devices under 0000:09:00.1: cvl_0_1 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.927 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:22.185 18:05:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:22.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:22.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms
00:22:22.185
00:22:22.185 --- 10.0.0.2 ping statistics ---
00:22:22.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.185 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:22.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:22.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:22:22.185
00:22:22.185 --- 10.0.0.1 ping statistics ---
00:22:22.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.185 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2855205
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2855205
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2855205 ']'
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
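The trace above is where nvmf/common.sh builds the single-host test topology: one of the two ports found earlier (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same setup, using the interface and namespace names from this run (they will differ on other NICs):

  # Sketch of the topology the harness builds; names are taken from this log.
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator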
00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.185 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.185 [2024-07-24 18:05:08.390744] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:22:22.185 [2024-07-24 18:05:08.390814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.185 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.185 [2024-07-24 18:05:08.453432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.443 [2024-07-24 18:05:08.560691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.443 [2024-07-24 18:05:08.560757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.443 [2024-07-24 18:05:08.560796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.443 [2024-07-24 18:05:08.560808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.443 [2024-07-24 18:05:08.560818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.443 [2024-07-24 18:05:08.560843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.443 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.443 [2024-07-24 18:05:08.709474] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:22:22.702 [2024-07-24 18:05:08.717683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.702 null0 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.702 null1 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2855226 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2855226 /tmp/host.sock 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2855226 ']' 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:22.702 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.702 18:05:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.702 [2024-07-24 18:05:08.796442] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:22:22.702 [2024-07-24 18:05:08.796526] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855226 ] 00:22:22.702 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.702 [2024-07-24 18:05:08.861095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.960 [2024-07-24 18:05:08.973939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.960 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.960 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:22.960 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.960 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:22.960 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.960 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.960 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.961 
18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.961 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:23.219 18:05:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 [2024-07-24 18:05:09.391446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.477 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.477 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:23.477 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:23.478 18:05:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:24.044 [2024-07-24 18:05:10.117192] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:24.044 [2024-07-24 18:05:10.117244] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:24.044 [2024-07-24 18:05:10.117268] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.044 
[2024-07-24 18:05:10.203573] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:24.044 [2024-07-24 18:05:10.268057] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:24.044 [2024-07-24 18:05:10.268096] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.302 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
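By this point the test is driving two SPDK apps: the target inside the namespace (default RPC socket /var/tmp/spdk.sock) and the host instance started with -r /tmp/host.sock running the bdev_nvme discovery service. Condensed from the rpc_cmd calls traced above, the flow is roughly the following sketch, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

  # Target side: discovery listener plus one subsystem backed by a null bdev.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # Host side: point the discovery service at port 8009 and watch the
  # controller (nvme0) and its namespace bdev (nvme0n1) appear.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs

Adding null1 as a second namespace then surfaces nvme0n2 on the host with no further host-side action, via the AER and discovery-log-page path logged by bdev_nvme.c above.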
00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:24.560 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.561 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:24.819 18:05:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.819 18:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.819 [2024-07-24 18:05:11.028486] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:24.819 [2024-07-24 18:05:11.029377] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:24.819 [2024-07-24 18:05:11.029416] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:24.819 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:24.820 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:24.820 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.820 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.820 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.820 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.820 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.820 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.078 [2024-07-24 18:05:11.116063] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:25.078 18:05:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:25.078 18:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:25.078 [2024-07-24 18:05:11.221772] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:25.078 [2024-07-24 18:05:11.221799] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:25.078 [2024-07-24 18:05:11.221809] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:26.012 18:05:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.012 [2024-07-24 18:05:12.252294] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:26.012 [2024-07-24 18:05:12.252331] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.012 [2024-07-24 18:05:12.259358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:26.012 [2024-07-24 18:05:12.259408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.012 [2024-07-24 18:05:12.259427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.012 [2024-07-24 18:05:12.259442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.012 [2024-07-24 18:05:12.259466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.012 [2024-07-24 18:05:12.259487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.012 [2024-07-24 18:05:12.259501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.012 [2024-07-24 18:05:12.259515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.012 [2024-07-24 18:05:12.259528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036c20 is same with the state(6) to be set 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:26.012 [2024-07-24 18:05:12.269364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c20 (9): Bad file descriptor 00:22:26.012 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.012 [2024-07-24 18:05:12.279419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.012 [2024-07-24 18:05:12.279647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.012 [2024-07-24 18:05:12.279677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036c20 with addr=10.0.0.2, port=4420 00:22:26.012 [2024-07-24 18:05:12.279695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036c20 is same with the state(6) to be set 00:22:26.012 [2024-07-24 18:05:12.279717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c20 (9): Bad file descriptor 00:22:26.012 [2024-07-24 18:05:12.279751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.012 [2024-07-24 18:05:12.279769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.012 [2024-07-24 18:05:12.279784] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.012 [2024-07-24 18:05:12.279806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.271 [2024-07-24 18:05:12.289506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.271 [2024-07-24 18:05:12.289724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.271 [2024-07-24 18:05:12.289752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036c20 with addr=10.0.0.2, port=4420 00:22:26.271 [2024-07-24 18:05:12.289768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036c20 is same with the state(6) to be set 00:22:26.271 [2024-07-24 18:05:12.289790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c20 (9): Bad file descriptor 00:22:26.271 [2024-07-24 18:05:12.289812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.271 [2024-07-24 18:05:12.289826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.271 [2024-07-24 18:05:12.289839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.271 [2024-07-24 18:05:12.289858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.271 [2024-07-24 18:05:12.299589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.271 [2024-07-24 18:05:12.299814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.271 [2024-07-24 18:05:12.299859] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036c20 with addr=10.0.0.2, port=4420 00:22:26.271 [2024-07-24 18:05:12.299879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036c20 is same with the state(6) to be set 00:22:26.271 [2024-07-24 18:05:12.299905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c20 (9): Bad file descriptor 00:22:26.271 [2024-07-24 18:05:12.299941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.271 [2024-07-24 18:05:12.299960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.271 [2024-07-24 18:05:12.299976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.271 [2024-07-24 18:05:12.299998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.271 [2024-07-24 18:05:12.309673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.271 [2024-07-24 18:05:12.309870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.271 [2024-07-24 18:05:12.309903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036c20 with addr=10.0.0.2, port=4420 00:22:26.271 [2024-07-24 18:05:12.309921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036c20 is same with the state(6) to be set 00:22:26.271 [2024-07-24 18:05:12.309947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c20 (9): Bad file descriptor 00:22:26.271 [2024-07-24 18:05:12.309983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.271 [2024-07-24 18:05:12.310003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.271 [2024-07-24 18:05:12.310019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.271 [2024-07-24 18:05:12.310041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
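The repeated reset/reconnect failures above (errno 111 against port 4420 after the listener is removed) are being watched by the waitforcondition helper, which polls an arbitrary shell condition up to ten times with a one-second back-off. A minimal sketch, reconstructed from the xtrace markers (autotest_common.sh@912-918) rather than copied from the source, so details may differ:

    waitforcondition() {
        local cond=$1      # condition string supplied by the caller (@912)
        local max=10       # poll at most 10 times (@913)
        while ((max--)); do                # @914
            if eval "$cond"; then          # @915: re-evaluate each pass
                return 0                   # @916: condition met
            fi
            sleep 1                        # @918: back off before retrying
        done
        return 1           # condition never became true within ~10s
    }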
00:22:26.271 [2024-07-24 18:05:12.319754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.271 [2024-07-24 18:05:12.319971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.271 [2024-07-24 18:05:12.319999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036c20 with addr=10.0.0.2, port=4420 00:22:26.271 [2024-07-24 18:05:12.320015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036c20 is same with the state(6) to be set 00:22:26.271 [2024-07-24 18:05:12.320038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c20 (9): Bad file descriptor 00:22:26.271 [2024-07-24 18:05:12.320082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.271 [2024-07-24 18:05:12.320129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.271 [2024-07-24 18:05:12.320146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.271 [2024-07-24 18:05:12.320166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.271 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.271 [2024-07-24 18:05:12.329827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:26.271 [2024-07-24 18:05:12.330021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.271 [2024-07-24 18:05:12.330048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036c20 with addr=10.0.0.2, port=4420 00:22:26.271 [2024-07-24 18:05:12.330064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036c20 is same with the state(6) to be set 00:22:26.271 [2024-07-24 18:05:12.330100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036c20 (9): Bad file descriptor 00:22:26.271 [2024-07-24 18:05:12.330144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:26.271 [2024-07-24 18:05:12.330162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:26.271 [2024-07-24 18:05:12.330176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:26.272 [2024-07-24 18:05:12.330195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
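The [[ 4421 == \4\4\2\1 ]] comparison below consumes get_subsystem_paths, which flattens the trsvcid (TCP port) of every active path on a controller into one sorted line. A sketch matching the pipeline visible in the xtrace (host/discovery.sh@63); rpc_cmd here is assumed to be the test suite's wrapper around scripts/rpc.py:

    get_subsystem_paths() {
        # Query the host-side bdev_nvme layer over its private RPC socket,
        # extract each path's transport service ID, and collapse the
        # result to a single space-separated line, e.g. "4420 4421".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' |
            sort -n |
            xargs
    }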
00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:26.272 [2024-07-24 18:05:12.339201] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:26.272 [2024-07-24 18:05:12.339233] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # get_notification_count 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:26.272 18:05:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.530 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:26.530 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:26.530 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:26.530 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.530 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:26.530 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.530 18:05:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.463 [2024-07-24 18:05:13.616917] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:27.463 [2024-07-24 18:05:13.616951] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:27.463 [2024-07-24 18:05:13.616981] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:27.720 [2024-07-24 18:05:13.745418] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:27.978 [2024-07-24 18:05:14.015418] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:27.978 [2024-07-24 18:05:14.015472] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.978 request: 00:22:27.978 { 00:22:27.978 "name": "nvme", 00:22:27.978 "trtype": "tcp", 00:22:27.978 "traddr": "10.0.0.2", 00:22:27.978 "adrfam": "ipv4", 00:22:27.978 "trsvcid": "8009", 00:22:27.978 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:27.978 "wait_for_attach": true, 00:22:27.978 "method": "bdev_nvme_start_discovery", 00:22:27.978 "req_id": 1 00:22:27.978 } 00:22:27.978 Got JSON-RPC error response 00:22:27.978 response: 00:22:27.978 { 00:22:27.978 "code": -17, 00:22:27.978 "message": "File exists" 00:22:27.978 } 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.978 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.979 request: 00:22:27.979 { 00:22:27.979 "name": "nvme_second", 00:22:27.979 "trtype": "tcp", 00:22:27.979 "traddr": "10.0.0.2", 00:22:27.979 "adrfam": "ipv4", 00:22:27.979 "trsvcid": "8009", 00:22:27.979 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:27.979 "wait_for_attach": true, 00:22:27.979 "method": "bdev_nvme_start_discovery", 00:22:27.979 "req_id": 1 00:22:27.979 } 00:22:27.979 Got JSON-RPC error response 00:22:27.979 response: 00:22:27.979 { 00:22:27.979 "code": -17, 00:22:27.979 "message": "File exists" 00:22:27.979 } 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:27.979 18:05:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.979 18:05:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.352 [2024-07-24 18:05:15.234917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.352 [2024-07-24 18:05:15.234973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a030 with addr=10.0.0.2, port=8010 00:22:29.352 [2024-07-24 18:05:15.235006] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:29.352 [2024-07-24 18:05:15.235021] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:29.352 [2024-07-24 18:05:15.235046] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:30.287 [2024-07-24 18:05:16.237385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.287 [2024-07-24 18:05:16.237461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x203a030 with addr=10.0.0.2, port=8010 00:22:30.287 [2024-07-24 18:05:16.237495] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:30.287 [2024-07-24 18:05:16.237511] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:22:30.287 [2024-07-24 18:05:16.237526] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:31.256 [2024-07-24 18:05:17.239561] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:31.256 request: 00:22:31.256 { 00:22:31.256 "name": "nvme_second", 00:22:31.256 "trtype": "tcp", 00:22:31.256 "traddr": "10.0.0.2", 00:22:31.256 "adrfam": "ipv4", 00:22:31.256 "trsvcid": "8010", 00:22:31.256 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:31.256 "wait_for_attach": false, 00:22:31.256 "attach_timeout_ms": 3000, 00:22:31.256 "method": "bdev_nvme_start_discovery", 00:22:31.256 "req_id": 1 00:22:31.256 } 00:22:31.256 Got JSON-RPC error response 00:22:31.256 response: 00:22:31.256 { 00:22:31.256 "code": -110, 00:22:31.256 "message": "Connection timed out" 00:22:31.256 } 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2855226 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.256 rmmod nvme_tcp 00:22:31.256 rmmod nvme_fabrics 00:22:31.256 rmmod nvme_keyring 00:22:31.256 18:05:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2855205 ']' 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2855205 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2855205 ']' 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2855205 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2855205 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2855205' 00:22:31.256 killing process with pid 2855205 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2855205 00:22:31.256 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2855205 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.527 18:05:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.056 00:22:34.056 real 0m13.514s 00:22:34.056 user 0m19.742s 00:22:34.056 sys 0m2.812s 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.056 ************************************ 00:22:34.056 END TEST nvmf_host_discovery 00:22:34.056 ************************************ 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
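The killprocess teardown above guards against signalling the wrong thing: it requires a pid, confirms the process is still alive, inspects its name (reactor_1 for this target) before killing and reaping it. A sketch assembled from the xtrace markers (autotest_common.sh@948-972); the exact handling of the sudo case is an assumption, since the log only shows the comparison:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # @948: require a pid
        kill -0 "$pid" 2>/dev/null || return 0       # @952: already gone
        if [ "$(uname)" = Linux ]; then              # @953
            local name
            name=$(ps --no-headers -o comm= "$pid")  # @954: e.g. reactor_1
            # @958: the helper special-cases name = sudo so the signal
            # reaches the actual workload rather than the wrapper (assumed)
            [ "$name" = sudo ] && echo "pid $pid is a sudo wrapper" >&2
        fi
        echo "killing process with pid $pid"         # @966
        kill "$pid"                                  # @967
        wait "$pid" 2>/dev/null                      # @972: reap the child
    }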
00:22:34.056 18:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.056 ************************************ 00:22:34.056 START TEST nvmf_host_multipath_status 00:22:34.056 ************************************ 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:34.056 * Looking for test storage... 00:22:34.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.056 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
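One detail worth keeping in mind for the connect commands later on: NVME_HOSTID above is simply the UUID tail of the NQN that nvme gen-hostnqn produced. A plausible sketch of the derivation (the real nvmf/common.sh may phrase it differently):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the <uuid> after the last ':'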
00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
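The enormous PATH in the paths/export.sh trace above is cumulative: every nested test script re-sources export.sh, and the file prepends unconditionally each time, so the same three toolchain directories pile up once per source. Reconstructed from the @2-@6 trace lines, the file is effectively:

# paths/export.sh, as implied by the xtrace above
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH
echo "$PATH"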
00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.057 18:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:35.957 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.957 
18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:35.957 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.957 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:35.958 Found net devices under 0000:09:00.0: cvl_0_0 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.958 18:05:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:35.958 Found net devices under 0000:09:00.1: cvl_0_1 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.958 18:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:35.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:22:35.958 00:22:35.958 --- 10.0.0.2 ping statistics --- 00:22:35.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.958 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:35.958 00:22:35.958 --- 10.0.0.1 ping statistics --- 00:22:35.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.958 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2858381 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2858381 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2858381 ']' 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
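Those two clean pings are the acceptance test for the namespace plumbing set up just before them: target port cvl_0_0 lives in its own namespace with 10.0.0.2, initiator port cvl_0_1 stays in the default namespace with 10.0.0.1. Collected in one place (commands copied from the nvmf_tcp_init trace above):

ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP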
00:22:35.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.958 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:35.958 [2024-07-24 18:05:22.101208] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:22:35.958 [2024-07-24 18:05:22.101311] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.958 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.958 [2024-07-24 18:05:22.166020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:36.218 [2024-07-24 18:05:22.277855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.218 [2024-07-24 18:05:22.277908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.218 [2024-07-24 18:05:22.277921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.218 [2024-07-24 18:05:22.277937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.218 [2024-07-24 18:05:22.277947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.218 [2024-07-24 18:05:22.278000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.218 [2024-07-24 18:05:22.278004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2858381 00:22:36.218 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:36.476 [2024-07-24 18:05:22.700597] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.476 18:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:36.735 Malloc0 00:22:36.993 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:36.993 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:37.251 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.507 [2024-07-24 18:05:23.722517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.507 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:37.764 [2024-07-24 18:05:23.963201] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2858545 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2858545 /var/tmp/bdevperf.sock 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2858545 ']' 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
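With the listener on 4421 registered, the target side is complete and the bdevperf initiator is coming up against it. For reference, the whole two-path multipath target was assembled with just these RPCs (arguments verbatim from the trace; judging by rpc.py's option names, -r enables ANA reporting and -m 2 caps the namespace count):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421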
00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.764 18:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:38.330 18:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.330 18:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:38.330 18:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:38.330 18:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:38.930 Nvme0n1 00:22:38.930 18:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:39.496 Nvme0n1 00:22:39.496 18:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:39.496 18:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:41.397 18:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:41.397 18:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:41.656 18:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:41.914 18:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:42.849 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:42.849 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:42.849 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.849 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:43.108 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.108 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:43.108 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.108 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.366 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.366 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.366 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.366 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:43.625 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.625 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:43.625 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.625 18:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:43.883 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.883 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:43.883 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.883 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:44.142 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.142 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:44.142 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.142 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.399 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.399 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:44.399 18:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:44.656 18:05:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:44.912 18:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:45.844 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:45.844 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:45.844 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.844 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:46.102 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:46.102 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:46.102 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.102 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.359 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.359 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.359 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.359 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:46.617 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.617 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:46.617 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.617 18:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.875 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.875 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:46.875 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.875 18:05:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:47.132 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.132 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:47.132 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.132 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.389 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.389 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:47.389 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:47.647 18:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:47.904 18:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:48.838 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:48.838 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:48.838 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.838 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:49.097 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.097 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:49.097 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.097 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:49.356 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:49.356 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:49.356 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.356 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:49.613 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.613 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:49.613 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.613 18:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:49.871 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.871 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:49.871 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.871 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:50.133 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.133 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:50.133 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.133 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:50.438 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.438 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:50.438 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:50.727 18:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:50.985 18:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:51.919 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:51.919 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:51.919 18:05:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.919 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:52.179 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.179 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:52.179 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.179 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:52.438 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:52.438 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:52.438 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.438 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:52.696 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.696 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:52.696 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.697 18:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.954 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.954 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:52.954 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.954 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:53.212 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.212 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:53.212 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.212 18:05:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:53.470 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.470 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:53.470 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:53.728 18:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:53.986 18:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:54.920 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:54.920 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:54.920 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.920 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:55.178 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:55.178 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:55.178 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.178 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:55.437 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:55.437 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:55.437 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.437 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.696 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.696 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.696 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.696 18:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.954 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.954 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:55.954 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.954 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:56.212 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:56.212 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:56.212 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.212 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.470 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:56.470 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:56.470 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:56.728 18:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:56.986 18:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:57.926 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:57.926 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:57.926 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.926 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:58.183 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.183 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:58.183 18:05:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.183 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.441 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.441 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.441 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.441 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.699 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.699 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.699 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.699 18:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.957 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.957 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:58.957 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.957 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:59.214 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.214 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:59.215 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.215 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.472 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.472 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:59.731 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:22:59.731 18:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:59.989 18:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:00.247 18:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:01.182 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:01.182 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:01.182 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.182 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:01.441 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.441 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:01.441 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.441 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.700 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.700 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.700 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.700 18:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.959 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.959 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.959 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.959 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:02.217 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.217 18:05:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:02.217 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.217 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:02.476 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.476 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:02.476 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.476 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.734 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.734 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:02.734 18:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:02.992 18:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:03.249 18:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.622 18:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.880 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.880 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.880 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.880 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:05.138 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.138 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:05.139 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.139 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.396 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.397 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:05.397 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.397 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.654 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.654 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:05.654 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.654 18:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.912 18:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.912 18:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:05.912 18:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:06.170 18:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:06.428 18:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
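[Editor's note] Each ANA combination exercised above follows the same three-step pattern: set_ANA_state applies a state to each of the two listeners, the one-second sleep gives the initiator time to re-read the ANA log page, and check_status asserts six booleans -- current, connected, and accessible for ports 4420 and 4421, in that order -- via port_status. A minimal sketch of those helpers, reconstructed from the traced commands rather than copied from the verbatim multipath_status.sh source:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {   # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {     # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
        [[ $("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

    check_status() {    # six expected values, in the order the trace checks them
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Under the active_active multipath policy set at 18:05:45, two usable paths can both report current=true at once, which is what the all-true check immediately below verifies after both listeners were made non_optimized.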
00:23:07.364 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:07.364 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:07.364 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.364 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.622 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.622 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:07.622 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.622 18:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.880 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.880 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.880 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.880 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:08.138 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.138 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:08.138 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.138 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:08.397 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.397 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:08.397 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.397 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.655 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.655 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.655 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.655 18:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.914 18:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.914 18:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:08.914 18:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:09.171 18:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:09.430 18:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:10.364 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:10.364 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.364 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.364 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.622 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.622 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:10.622 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.622 18:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.880 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.880 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.880 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.880 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.138 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:23:11.138 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.138 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.138 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.396 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.396 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.396 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.396 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.654 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.654 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:11.654 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.654 18:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2858545 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2858545 ']' 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2858545 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2858545 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2858545' 00:23:11.912 killing process with pid 2858545 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2858545 00:23:11.912 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2858545 00:23:12.180 Connection closed with partial response: 00:23:12.180 00:23:12.180 00:23:12.180 
18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2858545 00:23:12.180 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.180 [2024-07-24 18:05:24.025654] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:23:12.180 [2024-07-24 18:05:24.025733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858545 ] 00:23:12.180 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.180 [2024-07-24 18:05:24.083957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.180 [2024-07-24 18:05:24.192575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.180 Running I/O for 90 seconds... 00:23:12.180 [2024-07-24 18:05:39.859755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.859811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.859878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.859899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.859925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.859943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.859966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.859984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.860041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.860097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.860162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
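[Editor's note] Every completion in this dump carries status (03/02): status code type 03h is Path Related Status and status code 02h is Asymmetric Access Inaccessible, matching the listener state the test set at 18:05:39; dnr:0 means the do-not-retry bit is clear, so the host-side multipath layer is allowed to requeue the I/O on the other path. A hypothetical helper (not part of the test) for decoding the "(SCT/SC)" pair that spdk_nvme_print_completion prints:

    decode_nvme_status() {   # $1 = status code type (hex), $2 = status code (hex)
        case "$1/$2" in
            00/00) echo "GENERIC / SUCCESS" ;;
            03/01) echo "PATH / ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
            03/02) echo "PATH / ASYMMETRIC ACCESS INACCESSIBLE" ;;
            03/03) echo "PATH / ASYMMETRIC ACCESS TRANSITION" ;;
            *)     echo "sct=0x$1 sc=0x$2 (see the NVMe base spec status code tables)" ;;
        esac
    }
    decode_nvme_status 03 02   # -> PATH / ASYMMETRIC ACCESS INACCESSIBLE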
00:23:12.180 [2024-07-24 18:05:39.860186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.860504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.860525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.860541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.861469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.180 [2024-07-24 18:05:39.861492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.861520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.861537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.861561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.180 [2024-07-24 18:05:39.861577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:12.180 [2024-07-24 18:05:39.861599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.861971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.861987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:12.181 [2024-07-24 18:05:39.862469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.862974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.862990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:12.181 [2024-07-24 18:05:39.863417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.181 [2024-07-24 18:05:39.863433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0 00:23:12.182 [2024-07-24 18:05:39.863793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.863966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.863989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.864009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.864034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.864050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.864073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.864089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.864139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.864159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:12.182 [2024-07-24 18:05:39.864184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.182 [2024-07-24 18:05:39.864200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:12.182 [2024-07-24 18:05:39.864224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.182 [2024-07-24 18:05:39.864241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:23:12.183 [2024-07-24 18:05:39.866035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:12.183 [2024-07-24 18:05:39.866051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:23:12.183 [2024-07-24 18:05:55.516576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:12.183 [2024-07-24 18:05:55.516634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[... repeated command/completion NOTICE pairs elided: every outstanding READ (SGL TRANSPORT DATA BLOCK) and WRITE (SGL DATA BLOCK OFFSET, len:0x1000) on qid:1 in the lba range 68392-72384 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, first in a burst at 18:05:39.864-39.866 and again at 18:05:55.516-55.529 ...]
00:23:12.187 [2024-07-24 18:05:55.529282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1
lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.529734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.529771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.529879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.529916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.529952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.529973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.529989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.531628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.531654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:23:12.188 [2024-07-24 18:05:55.531695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.531713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.531752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.531769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.531797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.531814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.531836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.531853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.531876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.531892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.531914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.531931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.531953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.531970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.532717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.532760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.532797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.532833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.532889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.532931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.532971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.532993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.533014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.533037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.188 [2024-07-24 18:05:55.533053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.533075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.533091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.533131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.533150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:12.188 [2024-07-24 18:05:55.533173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.188 [2024-07-24 18:05:55.533190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.534211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.534257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.534307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:12.189 [2024-07-24 18:05:55.534678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.534774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.534892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.534943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.534967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.534983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.535236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.535276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.535316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.535355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.535418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.189 [2024-07-24 18:05:55.535471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:12.189 [2024-07-24 18:05:55.535602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.189 [2024-07-24 18:05:55.535622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.535644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.535659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:23:12.190 [2024-07-24 18:05:55.536810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.536895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.536930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.536970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.536991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.537007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.538589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.538628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.538672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.538712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.538981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.538997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.539035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.539090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.539139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.539185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.539224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.539263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.539308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.539347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.539400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.190 [2024-07-24 18:05:55.539440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:12.190 [2024-07-24 18:05:55.539484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.190 [2024-07-24 18:05:55.539523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:12.190 [2024-07-24 18:05:55.539546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.539577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.539615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.539666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.539703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.539738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.539774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.539809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.539844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.539879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.539900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.539916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.541233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.541310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.541349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.541388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.541427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.541792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.191 [2024-07-24 18:05:55.541836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.541960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.541976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.542012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.542028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.542048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.542078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:12.191 [2024-07-24 18:05:55.542112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.191 [2024-07-24 18:05:55.542144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:12.196 [2024-07-24 18:05:55.571763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.196 [2024-07-24 18:05:55.571782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:12.196 [2024-07-24 18:05:55.571805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.196 [2024-07-24 18:05:55.571821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:12.196 [2024-07-24 18:05:55.571843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.196 [2024-07-24 18:05:55.571860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.571881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.571897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.571918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.197 [2024-07-24 18:05:55.571934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.571956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.571992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.572031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.197 [2024-07-24 18:05:55.572082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.572143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:12.197 [2024-07-24 18:05:55.572183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.572220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.572256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.197 [2024-07-24 18:05:55.572292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.572905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.572962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.572985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.573000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.573020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.573035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.573055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.573075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.573096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:12.197 [2024-07-24 18:05:55.573138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:12.197 [2024-07-24 18:05:55.573163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:12.197 [2024-07-24 18:05:55.573179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:12.197 [2024-07-24 18:05:55.573200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:12.197 [2024-07-24 18:05:55.573216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:12.197 [2024-07-24 18:05:55.573237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:12.197 [2024-07-24 18:05:55.573252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:12.197 [2024-07-24 18:05:55.573273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.197 [2024-07-24 18:05:55.573304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:12.197 Received shutdown signal, test time was about 32.387629 seconds
00:23:12.197
00:23:12.197 Latency(us)
00:23:12.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.197 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:12.197 Verification LBA range: start 0x0 length 0x4000
00:23:12.197 Nvme0n1 : 32.39 8039.87 31.41 0.00 0.00 15893.64 464.21 4026531.84
00:23:12.197 ===================================================================================================================
00:23:12.197 Total : 8039.87 31.41 0.00 0.00 15893.64 464.21 4026531.84
00:23:12.197 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124
-- # set -e 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2858381 ']' 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2858381 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2858381 ']' 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2858381 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2858381 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2858381' 00:23:12.456 killing process with pid 2858381 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2858381 00:23:12.456 18:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2858381 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.023 18:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.926 00:23:14.926 real 0m41.291s 00:23:14.926 user 2m4.142s 00:23:14.926 sys 0m10.541s 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:14.926 ************************************ 00:23:14.926 END TEST nvmf_host_multipath_status 00:23:14.926 ************************************ 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
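The bdevperf summary above is internally consistent: with 4096-byte I/Os, 8039.87 IOPS works out to 8039.87 * 4096 / 2^20, about 31.41 MiB/s, matching the MiB/s column, and Fail/s and TO/s are both 0.00 despite the flood of INACCESSIBLE completions logged during the path flips. The arithmetic, as a one-liner:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8039.87 * 4096 / 1048576 }'   # prints 31.41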
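The xtrace above is the standard fixture teardown: the script deletes the test subsystem over RPC, clears its exit traps, removes its scratch file, and nvmftestfini/nvmfcleanup unload the initiator kernel modules before killprocess stops the target (reactor_0, pid 2858381 in this run) and _remove_spdk_ns tears the namespace down. Condensed into plain commands; the helper internals are paraphrased from the trace, not quoted from nvmf/common.sh, and the ip netns delete line is an assumption about what _remove_spdk_ns amounts to here:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    trap - SIGINT SIGTERM EXIT
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    sync
    modprobe -v -r nvme-tcp       # drops nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines above)
    modprobe -v -r nvme-fabrics
    kill 2858381 && wait 2858381  # stop the nvmf target process
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1          # seen just below in the trace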
00:23:14.926 18:06:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.926 ************************************ 00:23:14.926 START TEST nvmf_discovery_remove_ifc 00:23:14.926 ************************************ 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:14.926 * Looking for test storage... 00:23:14.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.926 
18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.926 18:06:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.926 18:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@295 -- # net_devs=() 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:16.824 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:16.824 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.825 18:06:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:16.825 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:16.825 Found net devices under 0000:09:00.0: cvl_0_0 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:16.825 Found net devices under 0000:09:00.1: cvl_0_1 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.825 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.084 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:23:17.084 00:23:17.084 --- 10.0.0.2 ping statistics --- 00:23:17.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.084 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:23:17.084 00:23:17.084 --- 10.0.0.1 ping statistics --- 00:23:17.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.084 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2864862 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2864862 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2864862 ']' 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
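At this point nvmftestinit has rebuilt the usual two-endpoint NVMe/TCP topology: one port of the E810 pair (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace to act as the target at 10.0.0.2, its sibling (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1, and the two pings above prove reachability in both directions before the target application starts. The same setup, lifted from the trace into a standalone sketch (interface names and addresses exactly as in this run):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                      # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> initiator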
00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.084 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.084 [2024-07-24 18:06:03.227201] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:23:17.084 [2024-07-24 18:06:03.227288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.084 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.084 [2024-07-24 18:06:03.293927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.342 [2024-07-24 18:06:03.406610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.342 [2024-07-24 18:06:03.406665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.342 [2024-07-24 18:06:03.406688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.342 [2024-07-24 18:06:03.406704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.342 [2024-07-24 18:06:03.406719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.342 [2024-07-24 18:06:03.406772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.342 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.342 [2024-07-24 18:06:03.559831] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.342 [2024-07-24 18:06:03.567984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:17.342 null0 00:23:17.342 [2024-07-24 18:06:03.599954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2864983 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2864983 /tmp/host.sock 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2864983 ']' 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:17.601 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.601 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.601 [2024-07-24 18:06:03.671940] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:23:17.601 [2024-07-24 18:06:03.672028] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864983 ] 00:23:17.601 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.601 [2024-07-24 18:06:03.738367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.601 [2024-07-24 18:06:03.854371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.860 18:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.860 18:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.860 18:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:17.860 
18:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.860 18:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:18.843 [2024-07-24 18:06:05.027536] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:18.843 [2024-07-24 18:06:05.027569] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:18.843 [2024-07-24 18:06:05.027592] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.101 [2024-07-24 18:06:05.113866] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:19.101 [2024-07-24 18:06:05.340232] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:19.101 [2024-07-24 18:06:05.340307] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:19.101 [2024-07-24 18:06:05.340352] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:19.101 [2024-07-24 18:06:05.340376] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:19.101 [2024-07-24 18:06:05.340426] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:19.101 [2024-07-24 18:06:05.346362] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18a48e0 was disconnected and freed. delete nvme_qpair. 
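What the INFO/DEBUG lines above record: the host-side nvmf_tgt (started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme) attached a discovery controller to 10.0.0.2:8009, fetched the discovery log page, found NVM subsystem nqn.2016-06.io.spdk:cnode0 on port 4420 and attached it as controller nvme0, producing bdev nvme0n1; the script's wait_for_bdev helper then polls the bdev list once a second. Reduced to direct rpc.py calls; the script actually goes through its rpc_cmd wrapper, and the until loop below is a paraphrase of wait_for_bdev, not its exact code:

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # poll until the discovered namespace shows up as a bdev
    until scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme0n1; do
        sleep 1   # same cadence as the sleep 1 in the trace's wait loop
    done

The short discovery-side timeouts (--ctrlr-loss-timeout-sec 2, --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1) are what let the test observe the bdev disappearing quickly once the interface is taken away later on.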
00:23:19.101 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:19.359 18:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:20.289 18:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.659 18:06:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:21.659 18:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.591 18:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:23.523 18:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:24.457 18:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.714 [2024-07-24 18:06:10.781463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:24.714 [2024-07-24 18:06:10.781542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.714 [2024-07-24 18:06:10.781563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.714 [2024-07-24 18:06:10.781579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.714 [2024-07-24 18:06:10.781591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.714 [2024-07-24 18:06:10.781605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.714 [2024-07-24 18:06:10.781617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.714 [2024-07-24 18:06:10.781630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.714 [2024-07-24 18:06:10.781642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.714 [2024-07-24 18:06:10.781656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.714 [2024-07-24 18:06:10.781668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.714 [2024-07-24 18:06:10.781680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186b320 is same with the state(6) to be set 00:23:24.714 [2024-07-24 18:06:10.791469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186b320 (9): Bad file descriptor 00:23:24.714 [2024-07-24 18:06:10.801516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.647 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:25.647 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:23:25.647 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:25.647 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.647 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.647 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:25.647 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:25.647 [2024-07-24 18:06:11.858150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:25.647 [2024-07-24 18:06:11.858222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186b320 with addr=10.0.0.2, port=4420 00:23:25.648 [2024-07-24 18:06:11.858251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186b320 is same with the state(6) to be set 00:23:25.648 [2024-07-24 18:06:11.858307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186b320 (9): Bad file descriptor 00:23:25.648 [2024-07-24 18:06:11.858812] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:25.648 [2024-07-24 18:06:11.858861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.648 [2024-07-24 18:06:11.858880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.648 [2024-07-24 18:06:11.858897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.648 [2024-07-24 18:06:11.858933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.648 [2024-07-24 18:06:11.858953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.648 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.648 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:25.648 18:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.020 [2024-07-24 18:06:12.861469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:27.020 [2024-07-24 18:06:12.861520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:27.020 [2024-07-24 18:06:12.861534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:27.020 [2024-07-24 18:06:12.861548] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:27.020 [2024-07-24 18:06:12.861579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
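The connect() failures with errno 110 (ETIMEDOUT) above are the intended result of the interface-removal step at host/discovery_remove_ifc.sh@75-76 earlier in the trace. Reduced to its essentials, that step is the following, with the commands copied from the trace (the cvl_0_0_ns_spdk namespace and cvl_0_0 device come from the E810 test setup):

    # Remove the target's address and down its link inside the target's network
    # namespace, so the initiator's reconnect attempts time out (errno 110).
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

The mirror-image restore step at @82-83 (ip addr add 10.0.0.2/24, ip link set cvl_0_0 up) is what lets discovery re-attach the subsystem as nvme1 further down.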
00:23:27.020 [2024-07-24 18:06:12.861626] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:27.020 [2024-07-24 18:06:12.861674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.020 [2024-07-24 18:06:12.861694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.020 [2024-07-24 18:06:12.861712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.020 [2024-07-24 18:06:12.861724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.020 [2024-07-24 18:06:12.861737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.020 [2024-07-24 18:06:12.861750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.020 [2024-07-24 18:06:12.861763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.020 [2024-07-24 18:06:12.861776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.021 [2024-07-24 18:06:12.861795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.021 [2024-07-24 18:06:12.861808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.021 [2024-07-24 18:06:12.861820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:27.021 [2024-07-24 18:06:12.861924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186a780 (9): Bad file descriptor 00:23:27.021 [2024-07-24 18:06:12.862956] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:27.021 [2024-07-24 18:06:12.862978] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.021 18:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.021 18:06:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:27.021 18:06:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.962 18:06:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:27.962 18:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.897 [2024-07-24 18:06:14.914289] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:28.897 [2024-07-24 18:06:14.914326] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:28.897 [2024-07-24 18:06:14.914350] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.897 [2024-07-24 18:06:15.000655] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:28.897 18:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:29.154 [2024-07-24 18:06:15.185145] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:29.154 [2024-07-24 18:06:15.185211] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:29.155 [2024-07-24 18:06:15.185247] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:29.155 [2024-07-24 18:06:15.185270] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:29.155 [2024-07-24 18:06:15.185283] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.155 [2024-07-24 18:06:15.192543] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1871120 was disconnected and freed. 
delete nvme_qpair. 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2864983 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2864983 ']' 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2864983 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2864983 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2864983' 00:23:30.088 killing process with pid 2864983 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2864983 00:23:30.088 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2864983 00:23:30.345 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:30.345 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.346 rmmod nvme_tcp 00:23:30.346 rmmod nvme_fabrics 00:23:30.346 rmmod nvme_keyring 
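The killprocess invocations around this point (pid 2864983 above, pid 2864862 below) follow the same shape each time. A hedged reconstruction of the helper from the xtrace (common/autotest_common.sh@948-972; only the checks shown in the trace are confirmed, the sudo branch and error handling are assumptions):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                  # the '[' -z "$pid" ']' guard in the trace
        kill -0 "$pid" || return 0                 # nothing to do if it already exited
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            kill -9 "$pid"                         # assumption: escalate for sudo wrappers
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                # reap the process
    }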
00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2864862 ']' 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2864862 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2864862 ']' 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2864862 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2864862 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2864862' 00:23:30.346 killing process with pid 2864862 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2864862 00:23:30.346 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2864862 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.603 18:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.135 00:23:33.135 real 0m17.751s 00:23:33.135 user 0m25.815s 00:23:33.135 sys 0m3.040s 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.135 ************************************ 00:23:33.135 END TEST nvmf_discovery_remove_ifc 00:23:33.135 ************************************ 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.135 ************************************ 00:23:33.135 START TEST nvmf_identify_kernel_target 00:23:33.135 ************************************ 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:33.135 * Looking for test storage... 00:23:33.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.135 18:06:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.135 18:06:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.039 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:35.040 
18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:35.040 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:35.040 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:35.040 Found net devices under 0000:09:00.0: cvl_0_0 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:35.040 Found net devices under 0000:09:00.1: cvl_0_1 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:35.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:23:35.040 00:23:35.040 --- 10.0.0.2 ping statistics --- 00:23:35.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.040 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:23:35.040 00:23:35.040 --- 10.0.0.1 ping statistics --- 00:23:35.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.040 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:35.040 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:35.041 18:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:36.418 Waiting for block devices as requested 00:23:36.418 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:36.418 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:36.418 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:36.418 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:36.418 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:36.418 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:36.685 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:36.685 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:36.685 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:36.944 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:36.944 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:36.944 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:36.944 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:37.203 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:37.203 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:37.203 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:37.461 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
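The configure_kernel_target sequence traced below (nvmf/common.sh@658-677) builds a kernel nvmet subsystem around /dev/nvme0n1 and exposes it on TCP port 4420. Collected into one sketch, with the configfs attribute file names on the right inferred from the echo sequence (the xtrace shows only the echoed values, so those target paths are assumptions based on the standard nvmet configfs layout):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # publish the subsystem on the port

The nvme discover output that follows (two records: the discovery subsystem plus nqn.2016-06.io.spdk:testnqn) confirms the port and subsystem link took effect.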
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:23:37.461 No valid GPT data, bailing
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt=
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4
00:23:37.461 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:23:37.720 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420
00:23:37.720
00:23:37.720 Discovery Log Number of Records 2, Generation counter 2
00:23:37.720 =====Discovery Log Entry 0======
00:23:37.720 trtype: tcp
00:23:37.720 adrfam: ipv4
00:23:37.720 subtype: current discovery subsystem
00:23:37.720 treq: not specified, sq flow control disable supported
00:23:37.720 portid: 1
00:23:37.720 trsvcid: 4420
00:23:37.720 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:23:37.720 traddr: 10.0.0.1
00:23:37.720 eflags: none
00:23:37.720 sectype: none
00:23:37.720 =====Discovery Log Entry 1======
00:23:37.720 trtype: tcp
00:23:37.720 adrfam: ipv4
00:23:37.720 subtype: nvme subsystem
00:23:37.720 treq: not specified, sq flow control disable supported
00:23:37.720 portid: 1
00:23:37.720 trsvcid: 4420
00:23:37.720 subnqn: nqn.2016-06.io.spdk:testnqn
00:23:37.720 traddr: 10.0.0.1
00:23:37.720 eflags: none
00:23:37.720 sectype: none
00:23:37.720 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:23:37.720 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:23:37.720 EAL: No free 2048 kB hugepages reported on node 1
00:23:37.721 =====================================================
00:23:37.721 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:37.721 =====================================================
00:23:37.721 Controller Capabilities/Features
00:23:37.721 ================================
00:23:37.721 Vendor ID: 0000
00:23:37.721 Subsystem Vendor ID: 0000
00:23:37.721 Serial Number: 01b296f1b4cf8ce467ae
00:23:37.721 Model Number: Linux
00:23:37.721 Firmware Version: 6.7.0-68
00:23:37.721 Recommended Arb Burst: 0
00:23:37.721 IEEE OUI Identifier: 00 00 00
00:23:37.721 Multi-path I/O
00:23:37.721 May have multiple subsystem ports: No
00:23:37.721 May have multiple controllers: No
00:23:37.721 Associated with SR-IOV VF: No
00:23:37.721 Max Data Transfer Size: Unlimited
00:23:37.721 Max Number of Namespaces: 0
00:23:37.721 Max Number of I/O Queues: 1024
00:23:37.721 NVMe Specification Version (VS): 1.3
00:23:37.721 NVMe Specification Version (Identify): 1.3
00:23:37.721 Maximum Queue Entries: 1024
00:23:37.721 Contiguous Queues Required: No
00:23:37.721 Arbitration Mechanisms Supported
00:23:37.721 Weighted Round Robin: Not Supported
00:23:37.721 Vendor Specific: Not Supported
00:23:37.721 Reset Timeout: 7500 ms
00:23:37.721 Doorbell Stride: 4 bytes
00:23:37.721 NVM Subsystem Reset: Not Supported
00:23:37.721 Command Sets Supported
00:23:37.721 NVM Command Set: Supported
00:23:37.721 Boot Partition: Not Supported
00:23:37.721 Memory Page Size Minimum: 4096 bytes
00:23:37.721 Memory Page Size Maximum: 4096 bytes
00:23:37.721 Persistent Memory Region: Not Supported
00:23:37.721 Optional Asynchronous Events Supported
00:23:37.721 Namespace Attribute Notices: Not Supported
00:23:37.721 Firmware Activation Notices: Not Supported
00:23:37.721 ANA Change Notices: Not Supported
00:23:37.721 PLE Aggregate Log Change Notices: Not Supported
00:23:37.721 LBA Status Info Alert Notices: Not Supported
00:23:37.721 EGE Aggregate Log Change Notices: Not Supported
00:23:37.721 Normal NVM Subsystem Shutdown event: Not Supported
00:23:37.721 Zone Descriptor Change Notices: Not Supported
00:23:37.721 Discovery Log Change Notices: Supported
00:23:37.721 Controller Attributes
00:23:37.721 128-bit Host Identifier: Not Supported
00:23:37.721 Non-Operational Permissive Mode: Not Supported
00:23:37.721 NVM Sets: Not Supported
00:23:37.721 Read Recovery Levels: Not Supported
00:23:37.721 Endurance Groups: Not Supported
00:23:37.721 Predictable Latency Mode: Not Supported
00:23:37.721 Traffic Based Keep ALive: Not Supported
00:23:37.721 Namespace Granularity: Not Supported
00:23:37.721 SQ Associations: Not Supported
00:23:37.721 UUID List: Not Supported
00:23:37.721 Multi-Domain Subsystem: Not Supported
00:23:37.721 Fixed Capacity Management: Not Supported
00:23:37.721 Variable Capacity Management: Not Supported
00:23:37.721 Delete Endurance Group: Not Supported
00:23:37.721 Delete NVM Set: Not Supported
00:23:37.721 Extended LBA Formats Supported: Not Supported
00:23:37.721 Flexible Data Placement Supported: Not Supported
00:23:37.721
00:23:37.721 Controller Memory Buffer Support
00:23:37.721 ================================
00:23:37.721 Supported: No
00:23:37.721 00:23:37.721 Persistent Memory Region Support 00:23:37.721 ================================ 00:23:37.721 Supported: No 00:23:37.721 00:23:37.721 Admin Command Set Attributes 00:23:37.721 ============================ 00:23:37.721 Security Send/Receive: Not Supported 00:23:37.721 Format NVM: Not Supported 00:23:37.721 Firmware Activate/Download: Not Supported 00:23:37.721 Namespace Management: Not Supported 00:23:37.721 Device Self-Test: Not Supported 00:23:37.721 Directives: Not Supported 00:23:37.721 NVMe-MI: Not Supported 00:23:37.721 Virtualization Management: Not Supported 00:23:37.721 Doorbell Buffer Config: Not Supported 00:23:37.721 Get LBA Status Capability: Not Supported 00:23:37.721 Command & Feature Lockdown Capability: Not Supported 00:23:37.721 Abort Command Limit: 1 00:23:37.721 Async Event Request Limit: 1 00:23:37.721 Number of Firmware Slots: N/A 00:23:37.721 Firmware Slot 1 Read-Only: N/A 00:23:37.721 Firmware Activation Without Reset: N/A 00:23:37.721 Multiple Update Detection Support: N/A 00:23:37.721 Firmware Update Granularity: No Information Provided 00:23:37.721 Per-Namespace SMART Log: No 00:23:37.721 Asymmetric Namespace Access Log Page: Not Supported 00:23:37.721 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:37.721 Command Effects Log Page: Not Supported 00:23:37.721 Get Log Page Extended Data: Supported 00:23:37.721 Telemetry Log Pages: Not Supported 00:23:37.721 Persistent Event Log Pages: Not Supported 00:23:37.721 Supported Log Pages Log Page: May Support 00:23:37.721 Commands Supported & Effects Log Page: Not Supported 00:23:37.721 Feature Identifiers & Effects Log Page:May Support 00:23:37.721 NVMe-MI Commands & Effects Log Page: May Support 00:23:37.721 Data Area 4 for Telemetry Log: Not Supported 00:23:37.721 Error Log Page Entries Supported: 1 00:23:37.721 Keep Alive: Not Supported 00:23:37.721 00:23:37.721 NVM Command Set Attributes 00:23:37.721 ========================== 00:23:37.721 Submission Queue Entry Size 00:23:37.721 Max: 1 00:23:37.721 Min: 1 00:23:37.721 Completion Queue Entry Size 00:23:37.721 Max: 1 00:23:37.721 Min: 1 00:23:37.721 Number of Namespaces: 0 00:23:37.721 Compare Command: Not Supported 00:23:37.721 Write Uncorrectable Command: Not Supported 00:23:37.721 Dataset Management Command: Not Supported 00:23:37.721 Write Zeroes Command: Not Supported 00:23:37.721 Set Features Save Field: Not Supported 00:23:37.721 Reservations: Not Supported 00:23:37.721 Timestamp: Not Supported 00:23:37.721 Copy: Not Supported 00:23:37.721 Volatile Write Cache: Not Present 00:23:37.721 Atomic Write Unit (Normal): 1 00:23:37.721 Atomic Write Unit (PFail): 1 00:23:37.721 Atomic Compare & Write Unit: 1 00:23:37.721 Fused Compare & Write: Not Supported 00:23:37.721 Scatter-Gather List 00:23:37.721 SGL Command Set: Supported 00:23:37.721 SGL Keyed: Not Supported 00:23:37.721 SGL Bit Bucket Descriptor: Not Supported 00:23:37.721 SGL Metadata Pointer: Not Supported 00:23:37.721 Oversized SGL: Not Supported 00:23:37.721 SGL Metadata Address: Not Supported 00:23:37.721 SGL Offset: Supported 00:23:37.721 Transport SGL Data Block: Not Supported 00:23:37.721 Replay Protected Memory Block: Not Supported 00:23:37.721 00:23:37.721 Firmware Slot Information 00:23:37.721 ========================= 00:23:37.721 Active slot: 0 00:23:37.721 00:23:37.721 00:23:37.721 Error Log 00:23:37.721 ========= 00:23:37.721 00:23:37.721 Active Namespaces 00:23:37.721 ================= 00:23:37.721 Discovery Log Page 00:23:37.721 ================== 00:23:37.721 
Generation Counter: 2 00:23:37.721 Number of Records: 2 00:23:37.721 Record Format: 0 00:23:37.721 00:23:37.721 Discovery Log Entry 0 00:23:37.721 ---------------------- 00:23:37.721 Transport Type: 3 (TCP) 00:23:37.721 Address Family: 1 (IPv4) 00:23:37.721 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:37.721 Entry Flags: 00:23:37.721 Duplicate Returned Information: 0 00:23:37.721 Explicit Persistent Connection Support for Discovery: 0 00:23:37.721 Transport Requirements: 00:23:37.721 Secure Channel: Not Specified 00:23:37.721 Port ID: 1 (0x0001) 00:23:37.721 Controller ID: 65535 (0xffff) 00:23:37.721 Admin Max SQ Size: 32 00:23:37.721 Transport Service Identifier: 4420 00:23:37.721 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:37.721 Transport Address: 10.0.0.1 00:23:37.721 Discovery Log Entry 1 00:23:37.721 ---------------------- 00:23:37.721 Transport Type: 3 (TCP) 00:23:37.721 Address Family: 1 (IPv4) 00:23:37.721 Subsystem Type: 2 (NVM Subsystem) 00:23:37.721 Entry Flags: 00:23:37.721 Duplicate Returned Information: 0 00:23:37.721 Explicit Persistent Connection Support for Discovery: 0 00:23:37.721 Transport Requirements: 00:23:37.721 Secure Channel: Not Specified 00:23:37.721 Port ID: 1 (0x0001) 00:23:37.721 Controller ID: 65535 (0xffff) 00:23:37.721 Admin Max SQ Size: 32 00:23:37.721 Transport Service Identifier: 4420 00:23:37.721 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:37.721 Transport Address: 10.0.0.1 00:23:37.722 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:37.722 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.722 get_feature(0x01) failed 00:23:37.722 get_feature(0x02) failed 00:23:37.722 get_feature(0x04) failed 00:23:37.722 ===================================================== 00:23:37.722 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:37.722 ===================================================== 00:23:37.722 Controller Capabilities/Features 00:23:37.722 ================================ 00:23:37.722 Vendor ID: 0000 00:23:37.722 Subsystem Vendor ID: 0000 00:23:37.722 Serial Number: 6caecd8be51422dbd49c 00:23:37.722 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:37.722 Firmware Version: 6.7.0-68 00:23:37.722 Recommended Arb Burst: 6 00:23:37.722 IEEE OUI Identifier: 00 00 00 00:23:37.722 Multi-path I/O 00:23:37.722 May have multiple subsystem ports: Yes 00:23:37.722 May have multiple controllers: Yes 00:23:37.722 Associated with SR-IOV VF: No 00:23:37.722 Max Data Transfer Size: Unlimited 00:23:37.722 Max Number of Namespaces: 1024 00:23:37.722 Max Number of I/O Queues: 128 00:23:37.722 NVMe Specification Version (VS): 1.3 00:23:37.722 NVMe Specification Version (Identify): 1.3 00:23:37.722 Maximum Queue Entries: 1024 00:23:37.722 Contiguous Queues Required: No 00:23:37.722 Arbitration Mechanisms Supported 00:23:37.722 Weighted Round Robin: Not Supported 00:23:37.722 Vendor Specific: Not Supported 00:23:37.722 Reset Timeout: 7500 ms 00:23:37.722 Doorbell Stride: 4 bytes 00:23:37.722 NVM Subsystem Reset: Not Supported 00:23:37.722 Command Sets Supported 00:23:37.722 NVM Command Set: Supported 00:23:37.722 Boot Partition: Not Supported 00:23:37.722 Memory Page Size Minimum: 4096 bytes 00:23:37.722 Memory Page Size Maximum: 4096 bytes 00:23:37.722 
Persistent Memory Region: Not Supported 00:23:37.722 Optional Asynchronous Events Supported 00:23:37.722 Namespace Attribute Notices: Supported 00:23:37.722 Firmware Activation Notices: Not Supported 00:23:37.722 ANA Change Notices: Supported 00:23:37.722 PLE Aggregate Log Change Notices: Not Supported 00:23:37.722 LBA Status Info Alert Notices: Not Supported 00:23:37.722 EGE Aggregate Log Change Notices: Not Supported 00:23:37.722 Normal NVM Subsystem Shutdown event: Not Supported 00:23:37.722 Zone Descriptor Change Notices: Not Supported 00:23:37.722 Discovery Log Change Notices: Not Supported 00:23:37.722 Controller Attributes 00:23:37.722 128-bit Host Identifier: Supported 00:23:37.722 Non-Operational Permissive Mode: Not Supported 00:23:37.722 NVM Sets: Not Supported 00:23:37.722 Read Recovery Levels: Not Supported 00:23:37.722 Endurance Groups: Not Supported 00:23:37.722 Predictable Latency Mode: Not Supported 00:23:37.722 Traffic Based Keep ALive: Supported 00:23:37.722 Namespace Granularity: Not Supported 00:23:37.722 SQ Associations: Not Supported 00:23:37.722 UUID List: Not Supported 00:23:37.722 Multi-Domain Subsystem: Not Supported 00:23:37.722 Fixed Capacity Management: Not Supported 00:23:37.722 Variable Capacity Management: Not Supported 00:23:37.722 Delete Endurance Group: Not Supported 00:23:37.722 Delete NVM Set: Not Supported 00:23:37.722 Extended LBA Formats Supported: Not Supported 00:23:37.722 Flexible Data Placement Supported: Not Supported 00:23:37.722 00:23:37.722 Controller Memory Buffer Support 00:23:37.722 ================================ 00:23:37.722 Supported: No 00:23:37.722 00:23:37.722 Persistent Memory Region Support 00:23:37.722 ================================ 00:23:37.722 Supported: No 00:23:37.722 00:23:37.722 Admin Command Set Attributes 00:23:37.722 ============================ 00:23:37.722 Security Send/Receive: Not Supported 00:23:37.722 Format NVM: Not Supported 00:23:37.722 Firmware Activate/Download: Not Supported 00:23:37.722 Namespace Management: Not Supported 00:23:37.722 Device Self-Test: Not Supported 00:23:37.722 Directives: Not Supported 00:23:37.722 NVMe-MI: Not Supported 00:23:37.722 Virtualization Management: Not Supported 00:23:37.722 Doorbell Buffer Config: Not Supported 00:23:37.722 Get LBA Status Capability: Not Supported 00:23:37.722 Command & Feature Lockdown Capability: Not Supported 00:23:37.722 Abort Command Limit: 4 00:23:37.722 Async Event Request Limit: 4 00:23:37.722 Number of Firmware Slots: N/A 00:23:37.722 Firmware Slot 1 Read-Only: N/A 00:23:37.722 Firmware Activation Without Reset: N/A 00:23:37.722 Multiple Update Detection Support: N/A 00:23:37.722 Firmware Update Granularity: No Information Provided 00:23:37.722 Per-Namespace SMART Log: Yes 00:23:37.722 Asymmetric Namespace Access Log Page: Supported 00:23:37.722 ANA Transition Time : 10 sec 00:23:37.722 00:23:37.722 Asymmetric Namespace Access Capabilities 00:23:37.722 ANA Optimized State : Supported 00:23:37.722 ANA Non-Optimized State : Supported 00:23:37.722 ANA Inaccessible State : Supported 00:23:37.722 ANA Persistent Loss State : Supported 00:23:37.722 ANA Change State : Supported 00:23:37.722 ANAGRPID is not changed : No 00:23:37.722 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:37.722 00:23:37.722 ANA Group Identifier Maximum : 128 00:23:37.722 Number of ANA Group Identifiers : 128 00:23:37.722 Max Number of Allowed Namespaces : 1024 00:23:37.722 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:37.722 Command Effects Log Page: Supported 
00:23:37.722 Get Log Page Extended Data: Supported 00:23:37.722 Telemetry Log Pages: Not Supported 00:23:37.722 Persistent Event Log Pages: Not Supported 00:23:37.722 Supported Log Pages Log Page: May Support 00:23:37.722 Commands Supported & Effects Log Page: Not Supported 00:23:37.722 Feature Identifiers & Effects Log Page:May Support 00:23:37.722 NVMe-MI Commands & Effects Log Page: May Support 00:23:37.722 Data Area 4 for Telemetry Log: Not Supported 00:23:37.722 Error Log Page Entries Supported: 128 00:23:37.722 Keep Alive: Supported 00:23:37.722 Keep Alive Granularity: 1000 ms 00:23:37.722 00:23:37.722 NVM Command Set Attributes 00:23:37.722 ========================== 00:23:37.722 Submission Queue Entry Size 00:23:37.722 Max: 64 00:23:37.722 Min: 64 00:23:37.722 Completion Queue Entry Size 00:23:37.722 Max: 16 00:23:37.722 Min: 16 00:23:37.722 Number of Namespaces: 1024 00:23:37.722 Compare Command: Not Supported 00:23:37.722 Write Uncorrectable Command: Not Supported 00:23:37.722 Dataset Management Command: Supported 00:23:37.722 Write Zeroes Command: Supported 00:23:37.722 Set Features Save Field: Not Supported 00:23:37.722 Reservations: Not Supported 00:23:37.722 Timestamp: Not Supported 00:23:37.722 Copy: Not Supported 00:23:37.722 Volatile Write Cache: Present 00:23:37.722 Atomic Write Unit (Normal): 1 00:23:37.722 Atomic Write Unit (PFail): 1 00:23:37.722 Atomic Compare & Write Unit: 1 00:23:37.722 Fused Compare & Write: Not Supported 00:23:37.722 Scatter-Gather List 00:23:37.722 SGL Command Set: Supported 00:23:37.722 SGL Keyed: Not Supported 00:23:37.722 SGL Bit Bucket Descriptor: Not Supported 00:23:37.722 SGL Metadata Pointer: Not Supported 00:23:37.722 Oversized SGL: Not Supported 00:23:37.722 SGL Metadata Address: Not Supported 00:23:37.722 SGL Offset: Supported 00:23:37.722 Transport SGL Data Block: Not Supported 00:23:37.722 Replay Protected Memory Block: Not Supported 00:23:37.722 00:23:37.722 Firmware Slot Information 00:23:37.722 ========================= 00:23:37.722 Active slot: 0 00:23:37.722 00:23:37.722 Asymmetric Namespace Access 00:23:37.722 =========================== 00:23:37.722 Change Count : 0 00:23:37.722 Number of ANA Group Descriptors : 1 00:23:37.722 ANA Group Descriptor : 0 00:23:37.722 ANA Group ID : 1 00:23:37.722 Number of NSID Values : 1 00:23:37.722 Change Count : 0 00:23:37.722 ANA State : 1 00:23:37.722 Namespace Identifier : 1 00:23:37.722 00:23:37.722 Commands Supported and Effects 00:23:37.722 ============================== 00:23:37.722 Admin Commands 00:23:37.722 -------------- 00:23:37.722 Get Log Page (02h): Supported 00:23:37.722 Identify (06h): Supported 00:23:37.722 Abort (08h): Supported 00:23:37.722 Set Features (09h): Supported 00:23:37.722 Get Features (0Ah): Supported 00:23:37.722 Asynchronous Event Request (0Ch): Supported 00:23:37.722 Keep Alive (18h): Supported 00:23:37.722 I/O Commands 00:23:37.722 ------------ 00:23:37.722 Flush (00h): Supported 00:23:37.722 Write (01h): Supported LBA-Change 00:23:37.722 Read (02h): Supported 00:23:37.722 Write Zeroes (08h): Supported LBA-Change 00:23:37.722 Dataset Management (09h): Supported 00:23:37.722 00:23:37.722 Error Log 00:23:37.723 ========= 00:23:37.723 Entry: 0 00:23:37.723 Error Count: 0x3 00:23:37.723 Submission Queue Id: 0x0 00:23:37.723 Command Id: 0x5 00:23:37.723 Phase Bit: 0 00:23:37.723 Status Code: 0x2 00:23:37.723 Status Code Type: 0x0 00:23:37.723 Do Not Retry: 1 00:23:37.981 Error Location: 0x28 00:23:37.981 LBA: 0x0 00:23:37.981 Namespace: 0x0 00:23:37.981 Vendor Log 
Page: 0x0 00:23:37.981 ----------- 00:23:37.981 Entry: 1 00:23:37.981 Error Count: 0x2 00:23:37.981 Submission Queue Id: 0x0 00:23:37.981 Command Id: 0x5 00:23:37.981 Phase Bit: 0 00:23:37.981 Status Code: 0x2 00:23:37.981 Status Code Type: 0x0 00:23:37.981 Do Not Retry: 1 00:23:37.981 Error Location: 0x28 00:23:37.981 LBA: 0x0 00:23:37.981 Namespace: 0x0 00:23:37.981 Vendor Log Page: 0x0 00:23:37.981 ----------- 00:23:37.981 Entry: 2 00:23:37.981 Error Count: 0x1 00:23:37.981 Submission Queue Id: 0x0 00:23:37.981 Command Id: 0x4 00:23:37.981 Phase Bit: 0 00:23:37.981 Status Code: 0x2 00:23:37.981 Status Code Type: 0x0 00:23:37.981 Do Not Retry: 1 00:23:37.981 Error Location: 0x28 00:23:37.981 LBA: 0x0 00:23:37.981 Namespace: 0x0 00:23:37.981 Vendor Log Page: 0x0 00:23:37.981 00:23:37.981 Number of Queues 00:23:37.981 ================ 00:23:37.982 Number of I/O Submission Queues: 128 00:23:37.982 Number of I/O Completion Queues: 128 00:23:37.982 00:23:37.982 ZNS Specific Controller Data 00:23:37.982 ============================ 00:23:37.982 Zone Append Size Limit: 0 00:23:37.982 00:23:37.982 00:23:37.982 Active Namespaces 00:23:37.982 ================= 00:23:37.982 get_feature(0x05) failed 00:23:37.982 Namespace ID:1 00:23:37.982 Command Set Identifier: NVM (00h) 00:23:37.982 Deallocate: Supported 00:23:37.982 Deallocated/Unwritten Error: Not Supported 00:23:37.982 Deallocated Read Value: Unknown 00:23:37.982 Deallocate in Write Zeroes: Not Supported 00:23:37.982 Deallocated Guard Field: 0xFFFF 00:23:37.982 Flush: Supported 00:23:37.982 Reservation: Not Supported 00:23:37.982 Namespace Sharing Capabilities: Multiple Controllers 00:23:37.982 Size (in LBAs): 1953525168 (931GiB) 00:23:37.982 Capacity (in LBAs): 1953525168 (931GiB) 00:23:37.982 Utilization (in LBAs): 1953525168 (931GiB) 00:23:37.982 UUID: ebcff9c8-2acf-4a9e-98bf-43583ba44f0d 00:23:37.982 Thin Provisioning: Not Supported 00:23:37.982 Per-NS Atomic Units: Yes 00:23:37.982 Atomic Boundary Size (Normal): 0 00:23:37.982 Atomic Boundary Size (PFail): 0 00:23:37.982 Atomic Boundary Offset: 0 00:23:37.982 NGUID/EUI64 Never Reused: No 00:23:37.982 ANA group ID: 1 00:23:37.982 Namespace Write Protected: No 00:23:37.982 Number of LBA Formats: 1 00:23:37.982 Current LBA Format: LBA Format #00 00:23:37.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:37.982 00:23:37.982 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:37.982 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.982 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:37.982 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.982 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:37.982 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.982 18:06:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.982 rmmod nvme_tcp 00:23:37.982 rmmod nvme_fabrics 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:37.982 18:06:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.982 18:06:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:39.886 18:06:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:41.260 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:41.260 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:41.260 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:41.260 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:41.260 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:41.260 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:41.260 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:41.260 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:41.260 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:41.260 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:41.260 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:41.260 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:41.260 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:41.260 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:41.260 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:23:41.260 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:42.197 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:42.197 00:23:42.197 real 0m9.571s 00:23:42.197 user 0m1.957s 00:23:42.197 sys 0m3.497s 00:23:42.197 18:06:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:42.197 18:06:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.197 ************************************ 00:23:42.197 END TEST nvmf_identify_kernel_target 00:23:42.197 ************************************ 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.456 ************************************ 00:23:42.456 START TEST nvmf_auth_host 00:23:42.456 ************************************ 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:42.456 * Looking for test storage... 00:23:42.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
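With nvmf_identify_kernel_target finished, the run is now inside nvmf_auth_host, SPDK's host-side test for NVMe in-band authentication (DH-HMAC-CHAP). host/auth.sh, traced next, declares the digest list (sha256, sha384, sha512) and the FFDHE group list (ffdhe2048 through ffdhe8192) it will combine, then pre-generates one DHHC-1 key pair per slot. Schematically the sweep looks like the sketch below, where run_auth_case is a hypothetical stand-in for the script's per-combination connect-and-authenticate logic:

  # sketch only: the real iteration lives in host/auth.sh
  for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      run_auth_case "$digest" "$dhgroup"   # hypothetical helper
    done
  done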
00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:42.456 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:42.457 18:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:44.357 18:06:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:44.357 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
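The device-ID tables above drive NIC selection: gather_supported_nvmf_pci_devs matches PCI vendor:device pairs against the e810, x722, and mlx arrays, and in this run the two hits at 0000:09:00.0 and 0000:09:00.1 (8086:159b, an Intel E810 port) are what the test uses. The same classification can be reproduced by hand (a sketch; needs pciutils):

  lspci -d 8086:159b    # the E810 ID matched in this run
  lspci -d 8086:1592    # the other E810 ID the helper also accepts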
00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.357 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:44.358 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:44.358 Found net devices under 0000:09:00.0: cvl_0_0 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:44.358 Found net devices under 0000:09:00.1: cvl_0_1 00:23:44.358 18:06:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.358 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:23:44.617 00:23:44.617 --- 10.0.0.2 ping statistics --- 00:23:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.617 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:23:44.617 00:23:44.617 --- 10.0.0.1 ping statistics --- 00:23:44.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.617 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2872602 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2872602 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2872602 ']' 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
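nvmf_tcp_init above gives the run a two-interface loopback fabric: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then launched inside that namespace via "ip netns exec". Condensed from the commands traced above:

  # topology built by nvmf_tcp_init (condensed from the trace, not a new design)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings above confirm reachability in both directions before the target process starts.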
00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.617 18:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=60a418e419551d4c4c1466cde1e3792b 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gxt 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 60a418e419551d4c4c1466cde1e3792b 0 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 60a418e419551d4c4c1466cde1e3792b 0 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=60a418e419551d4c4c1466cde1e3792b 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gxt 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gxt 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gxt 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:44.876 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:44.876 18:06:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d5a1b1629707422bad4dc8f957adfe5ac4d80ca2015c17bdc345cc5540386ee1 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Ao3 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d5a1b1629707422bad4dc8f957adfe5ac4d80ca2015c17bdc345cc5540386ee1 3 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d5a1b1629707422bad4dc8f957adfe5ac4d80ca2015c17bdc345cc5540386ee1 3 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.134 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d5a1b1629707422bad4dc8f957adfe5ac4d80ca2015c17bdc345cc5540386ee1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Ao3 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Ao3 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ao3 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=df153a6ad0fec31119b5b32b02a4388a1a3650b54dc7532a 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Adn 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df153a6ad0fec31119b5b32b02a4388a1a3650b54dc7532a 0 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df153a6ad0fec31119b5b32b02a4388a1a3650b54dc7532a 0 
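Key material for the auth matrix is generated inline: gen_dhchap_key pulls N random bytes from /dev/urandom with xxd, and format_dhchap_key (the inline python step above) wraps them into the DHHC-1 secret representation, whose second field is the digest id visible in the digests map (null=0, sha256=1, sha384=2, sha512=3). A minimal sketch of one such call follows; the nvme-cli line is an assumed equivalent, not what the script runs, and gen-dhchap-key requires a reasonably recent nvme-cli:

  # one gen_dhchap_key call, reduced to its essentials (sketch)
  key=$(xxd -p -c0 -l 32 /dev/urandom)            # 32 random bytes -> 64 hex chars
  # format_dhchap_key renders them as "DHHC-1:<digest id>:<encoded secret>:"
  nvme gen-dhchap-key --key-length=32 --hmac=3    # assumed nvme-cli equivalent for a sha512 key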
00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df153a6ad0fec31119b5b32b02a4388a1a3650b54dc7532a 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Adn 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Adn 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Adn 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b5c9b6fc669fca558adb8ce0d976a8d6a71e8e199003b096 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oyf 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b5c9b6fc669fca558adb8ce0d976a8d6a71e8e199003b096 2 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b5c9b6fc669fca558adb8ce0d976a8d6a71e8e199003b096 2 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b5c9b6fc669fca558adb8ce0d976a8d6a71e8e199003b096 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oyf 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oyf 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.oyf 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.135 18:06:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e3cca2b4a6984d189190019fe5a4052b 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JuB 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e3cca2b4a6984d189190019fe5a4052b 1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e3cca2b4a6984d189190019fe5a4052b 1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e3cca2b4a6984d189190019fe5a4052b 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JuB 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JuB 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.JuB 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a4100ba0b8b7f77ac5a9e3a2d4bbb5aa 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pnf 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a4100ba0b8b7f77ac5a9e3a2d4bbb5aa 1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a4100ba0b8b7f77ac5a9e3a2d4bbb5aa 1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=a4100ba0b8b7f77ac5a9e3a2d4bbb5aa 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:45.135 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pnf 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pnf 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.pnf 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f4b5214c1764aa543c272982e35ab5928e1b82adbb20771 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.46h 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f4b5214c1764aa543c272982e35ab5928e1b82adbb20771 2 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f4b5214c1764aa543c272982e35ab5928e1b82adbb20771 2 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f4b5214c1764aa543c272982e35ab5928e1b82adbb20771 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.46h 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.46h 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.46h 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:45.394 18:06:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8e4c20085332cfe81ae22dcb1f99561 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8zO 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8e4c20085332cfe81ae22dcb1f99561 0 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8e4c20085332cfe81ae22dcb1f99561 0 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8e4c20085332cfe81ae22dcb1f99561 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8zO 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8zO 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8zO 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a9fe3c540549c97fbd62d626794348f074db1b24ae9c3952d58ac38894b1d4c6 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zpN 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a9fe3c540549c97fbd62d626794348f074db1b24ae9c3952d58ac38894b1d4c6 3 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a9fe3c540549c97fbd62d626794348f074db1b24ae9c3952d58ac38894b1d4c6 3 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a9fe3c540549c97fbd62d626794348f074db1b24ae9c3952d58ac38894b1d4c6 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zpN 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zpN 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zpN 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2872602 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2872602 ']' 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.394 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gxt 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ao3 ]] 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ao3 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Adn 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.oyf ]] 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.oyf 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.JuB 00:23:45.653 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.pnf ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pnf 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.46h 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8zO ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8zO 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zpN 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.654 18:06:31 
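host/auth.sh@80-82 register every generated file with the running SPDK app; each becomes a named keyring entry (key0..key4, ckey0..ckey3) that later attach calls reference by name instead of by path. Stripped of the rpc_cmd test wrapper, the equivalent plain invocation is as follows (the rpc.py path is assumed relative to an SPDK checkout; the file lists restate what this log created above):

rpc=./scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
keys=(/tmp/spdk.key-null.gxt /tmp/spdk.key-null.Adn /tmp/spdk.key-sha256.JuB \
      /tmp/spdk.key-sha384.46h /tmp/spdk.key-sha512.zpN)
ckeys=(/tmp/spdk.key-sha512.Ao3 /tmp/spdk.key-sha384.oyf /tmp/spdk.key-sha256.pnf \
       /tmp/spdk.key-null.8zO "")                 # slot 4 has no controller key
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done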
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:45.654 18:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:47.029 Waiting for block devices as requested 00:23:47.029 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:47.029 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:47.029 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:47.029 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:47.287 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:47.287 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:47.287 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:47.287 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:47.547 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:47.547 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:47.547 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:47.804 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:47.804 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:47.804 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:47.804 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:48.062 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:48.062 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:48.629 No valid GPT data, bailing 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:48.629 18:06:34 
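Having found a usable namespace (non-zoned, no GPT signature, hence the "No valid GPT data, bailing" line), configure_kernel_target builds a kernel nvmet target purely through configfs; the three mkdir calls above create the subsystem, its first namespace, and a port. Condensed:

modprobe nvmet                                       # exposes /sys/kernel/config/nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys"                                      # the subsystem itself
mkdir "$subsys/namespaces/1"                         # namespace 1, backed next by /dev/nvme0n1
mkdir "$nvmet/ports/1"                               # TCP listener, configured next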
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:48.629 00:23:48.629 Discovery Log Number of Records 2, Generation counter 2 00:23:48.629 =====Discovery Log Entry 0====== 00:23:48.629 trtype: tcp 00:23:48.629 adrfam: ipv4 00:23:48.629 subtype: current discovery subsystem 00:23:48.629 treq: not specified, sq flow control disable supported 00:23:48.629 portid: 1 00:23:48.629 trsvcid: 4420 00:23:48.629 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:48.629 traddr: 10.0.0.1 00:23:48.629 eflags: none 00:23:48.629 sectype: none 00:23:48.629 =====Discovery Log Entry 1====== 00:23:48.629 trtype: tcp 00:23:48.629 adrfam: ipv4 00:23:48.629 subtype: nvme subsystem 00:23:48.629 treq: not specified, sq flow control disable supported 00:23:48.629 portid: 1 00:23:48.629 trsvcid: 4420 00:23:48.629 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:48.629 traddr: 10.0.0.1 00:23:48.629 eflags: none 00:23:48.629 sectype: none 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
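With the directories in place, nvmf/common.sh@665-677 wire the target up by writing plain strings into configfs attributes and symlinking the subsystem under the port; the nvme discover output above then shows both the discovery subsystem and cnode0 live on 10.0.0.1:4420. A condensed replay using the standard kernel nvmet attribute names (the log only prints the echoed values, so the value-to-attribute mapping here is inferred):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"               # opened here, narrowed again for auth
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                  # port starts serving the subsystem
nvme discover -t tcp -a 10.0.0.1 -s 4420             # expect the 2 records shown above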
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.629 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.888 nvme0n1 00:23:48.888 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.888 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.888 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.888 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.888 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.888 18:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:48.888 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
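host/auth.sh@36-38 plus the repeated nvmet_auth_set_key calls (@42-51) handle the target half of authentication: the subsystem is restricted to nqn.2024-02.io.spdk:host0, and that host entry carries the digest, DH group, and DHHC-1 secrets for the current iteration. A sketch in configfs terms; the dhchap_* attribute names are the usual kernel nvmet ones (Linux 5.17+), not something this log prints, so they are an assumption:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir -p "$host"
echo 0 > "$subsys/attr_allow_any_host"               # only allow-listed hosts from now on
ln -s "$host" "$subsys/allowed_hosts/" 2>/dev/null || true   # idempotent across iterations
echo 'hmac(sha256)' > "$host/dhchap_hash"            # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"         # DH group under test
cat /tmp/spdk.key-null.gxt   > "$host/dhchap_key"    # host secret (key0 above)
cat /tmp/spdk.key-sha512.Ao3 > "$host/dhchap_ctrl_key"  # controller secret (ckey0), optional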
00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.889 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.148 nvme0n1 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.148 18:06:35 
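connect_authenticate (host/auth.sh@55-65) is the initiator half, and every flag is visible in the RPC calls above: pin the bdev_nvme DH-HMAC-CHAP policy to the digest/dhgroup under test, attach using the keyring names registered earlier, check that the controller actually appeared, then detach for the next combination:

rpc=./scripts/rpc.py   # SPDK's rpc.py, talking to /var/tmp/spdk.sock by default
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                                 # auth succeeded iff the attach stuck
"$rpc" bdev_nvme_detach_controller nvme0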
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.148 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 nvme0n1 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.407 nvme0n1 00:23:49.407 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:49.665 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.666 nvme0n1 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.666 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.924 18:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.924 nvme0n1 00:23:49.924 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.924 18:06:36 
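From here to the end of the section the same two-step pattern simply repeats: host/auth.sh@100-102 walk every digest, every FFDHE group, and every key slot (slot 4 deliberately has no controller key, covering unidirectional authentication), re-keying the kernel target and reconnecting each time. Judging by the loop markers and the iteration order visible in the log (keyid innermost, dhgroup advancing to ffdhe3072 next while the digest stays sha256), the driving loop amounts to:

# keys/ckeys as registered earlier; lists taken from the printf lines shown above
for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side, configfs
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side, RPC
        done
    done
done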
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.925 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.183 nvme0n1 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.183 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.441 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.442 
18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.442 nvme0n1 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.442 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.700 18:06:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.700 nvme0n1 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.700 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.701 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.960 18:06:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.960 18:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.960 nvme0n1 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.960 18:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.960 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.218 nvme0n1 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.218 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:23:51.476 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.477 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.735 nvme0n1 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:51.735 18:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.735 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.736 18:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.994 nvme0n1 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
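Stripped of the xtrace bookkeeping, each connect_authenticate pass in this trace drives the SPDK host through the same four-step RPC sequence. A minimal sketch of the ffdhe4096/key2 iteration in progress here, with every command and value taken from the trace itself (rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py):

    # Pin the initiator to one digest/DH group, then authenticate a connect.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Confirm the controller authenticated and came up, then detach for the next keyid.
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

Note that --dhchap-ctrlr-key is only appended when a controller key exists for that keyid (the ${ckeys[keyid]:+...} expansion at host/auth.sh@58); keyid 4, whose ckey is empty in the trace above, attaches with --dhchap-key key4 alone.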
00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.994 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.252 nvme0n1 00:23:52.252 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.252 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.252 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.252 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.252 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.252 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.511 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.769 nvme0n1 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.769 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.770 18:06:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.770 18:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.027 nvme0n1 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.027 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.028 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.592 nvme0n1 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 
00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.592 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.850 18:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 nvme0n1 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.417 18:06:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.417 18:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.982 nvme0n1 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.982 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.983 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.549 nvme0n1 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.549 18:06:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.549 18:06:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.549 18:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.114 nvme0n1 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.114 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.115 18:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.049 nvme0n1 00:23:57.049 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.049 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.049 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.049 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.049 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.306 18:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 nvme0n1 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:58.239 
18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.239 18:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.174 nvme0n1 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.174 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.432 
18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.432 18:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.367 nvme0n1 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.367 18:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.300 nvme0n1 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.301 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.559 nvme0n1 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.559 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.560 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.560 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.560 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.560 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.560 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.560 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.560 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.818 nvme0n1 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:01.818 18:06:47 
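Editorial note: every secret in this trace uses the DH-HMAC-CHAP secret representation from NVMe TP 8006, DHHC-1:<hash>:<base64>:, where the middle field names the transformation applied to the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret with a 4-byte CRC-32 appended. That interpretation comes from the spec, not from this log; a quick pure-shell sanity check on the 01: key just above:

  key='DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8:'
  payload=$(cut -d: -f3 <<<"$key")            # third colon-separated field
  printf '%s' "$payload" | base64 -d | wc -c  # 36 bytes: 32-byte secret + 4-byte CRC-32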
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.818 18:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.818 nvme0n1 00:24:01.818 18:06:48 
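Editorial note: each nvme0n1 block in this stretch is one pass of connect_authenticate: restrict the host to one digest/dhgroup pair, attach with the key pair under test, confirm the controller appears, then detach. Condensed to the underlying RPCs, with arguments copied from the keyid 2 pass just above (rpc_cmd is the test suite's wrapper; the scripts/rpc.py path is an assumption):

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0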
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.818 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.818 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.818 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.818 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.818 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.077 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.077 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.078 nvme0n1 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.078 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.340 nvme0n1 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:02.340 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.341 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.599 nvme0n1 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.599 
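Editorial note: the "local ip / ip_candidates" lines repeating before every attach are get_main_ns_ip from nvmf/common.sh picking the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolved via indirect expansion (10.0.0.1 throughout this run). A minimal sketch of that selection, using only names visible in the trace:

  declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
  NVMF_INITIATOR_IP=10.0.0.1            # value observed in this run
  ip=${ip_candidates["tcp"]}            # transport in use; the trace tests [[ -z tcp ]]
  [[ -z ${!ip} ]] || echo "${!ip}"      # indirect expansion prints 10.0.0.1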
18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.599 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.600 18:06:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.600 18:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.859 nvme0n1 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.859 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.118 nvme0n1 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.118 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.377 nvme0n1 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.377 
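Editorial note: on the target side, nvmet_auth_set_key's echoes (the 'hmac(sha384)' and dhgroup lines just above, followed by the key and, when present, the ckey) presumably land in the kernel nvmet configfs attributes for the host entry. The redirections are not captured in this xtrace, so the paths below are an assumption based on the mainline nvmet ABI, not read from the log:

  h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # path assumed
  echo 'hmac(sha384)' > "$h/dhchap_hash"
  echo ffdhe3072      > "$h/dhchap_dhgroup"
  echo "$key"         > "$h/dhchap_key"        # host secret (a DHHC-1:... string)
  echo "$ckey"        > "$h/dhchap_ctrl_key"   # controller secret, skipped when empty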
18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.377 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.635 nvme0n1 00:24:03.635 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.635 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.635 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.635 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.636 
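Editorial note: keyid 4 is the unidirectional case. Its ckey is empty, so the [[ -z '' ]] branch at auth.sh@51 skips echoing a controller key, and the array expansion at auth.sh@58 produces no arguments, which is why the attach just above carries --dhchap-key key4 with no --dhchap-ctrlr-key. The expansion in isolation (values illustrative):

  declare -a ckeys
  ckeys[4]=''                  # no controller secret registered for keyid 4
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"           # prints 0: ":+" yields nothing for an empty or unset value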
18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.636 18:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.203 nvme0n1 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.203 18:06:50 
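Editorial note: stepping back, the auth.sh@101-104 markers threaded through this stretch trace a two-level sweep: the outer loop advances the FFDHE group (2048, then 3072, now 4096) while the inner loop replays all five key ids against it. The reconstructed shape, with the two functions stubbed out since their bodies are what the surrounding trace records:

  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)     # groups seen so far in this log
  keys=([0]=k0 [1]=k1 [2]=k2 [3]=k3 [4]=k4)    # placeholders; real entries are DHHC-1 secrets
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          echo "nvmet_auth_set_key  sha384 $dhgroup $keyid"   # target side (stub)
          echo "connect_authenticate sha384 $dhgroup $keyid"  # host side (stub)
      done
  done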
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.203 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.462 nvme0n1 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.462 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.720 nvme0n1 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.720 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.721 18:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.287 nvme0n1 00:24:05.287 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.287 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.287 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.287 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.288 18:06:51 
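The nvmf/common.sh@741-755 run just above is get_main_ns_ip picking the address for the attach: it maps the transport to the name of an environment variable and then dereferences it, ending in 'echo 10.0.0.1' on these tcp runs. A sketch of that selection logic follows; the variable names in the map match the trace, but the transport variable's name (TEST_TRANSPORT here) and the exact control flow are assumptions.

    # get_main_ns_ip, reconstructed from the nvmf/common.sh@741-755 trace lines.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # used on RDMA transports
            [tcp]=NVMF_INITIATOR_IP       # resolves to 10.0.0.1 in this log
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion checks its value
        echo "${!ip}"
    }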
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.288 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.546 nvme0n1 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:05.546 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.547 18:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.112 nvme0n1 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:06.112 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.113 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.679 nvme0n1 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.679 18:06:52 
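All of the secrets echoed above share the NVMe DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where <t> encodes how the secret was transformed (00 for a plain secret; 01, 02, 03 for SHA-256/384/512 transformed secrets) and the base64 payload is documented to carry the key material plus a trailing CRC-32. Treat those details as spec background rather than anything host/auth.sh itself checks; a quick illustrative shape check, using key2 from this log:

    # Illustrative only: validates the DHHC-1:<t>:<base64>: shape of a secret.
    # Not part of host/auth.sh; semantics of <t> per the NVMe auth spec.
    is_dhchap_secret() {
        local re='^DHHC-1:0[0-3]:[A-Za-z0-9+/]+={0,2}:$'
        [[ $1 =~ $re ]]
    }
    is_dhchap_secret 'DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8:' \
        && echo shape-ok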
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.679 18:06:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.679 18:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.249 nvme0n1 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.249 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:07.508 18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.508 
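One detail worth noticing across these sweeps: key ID 4 never has a controller key (its ckey= echo is empty, so the [[ -z '' ]] guard at host/auth.sh@51 skips the ctrlr-key echo), and the keyid-4 passes, like the ffdhe4096 one above and the ffdhe6144/ffdhe8192 ones that follow, attach with --dhchap-key only. That exercises unidirectional authentication (the host proves itself to the controller), while key IDs 0-3 add --dhchap-ctrlr-key for bidirectional authentication. The ${ckeys[keyid]:+...} expansion at @58 is what flips between the two modes; a self-contained illustration with stand-in values:

    # Stand-in values; real runs use the DHHC-1 secrets shown in the log.
    declare -a ckeys=([1]=some-secret [4]=)
    ckey=(${ckeys[4]:+--dhchap-ctrlr-key ckey4})
    echo "keyid 4: ${#ckey[@]} extra args"   # 0 -> unidirectional
    ckey=(${ckeys[1]:+--dhchap-ctrlr-key ckey1})
    echo "keyid 1: ${#ckey[@]} extra args"   # 2 -> bidirectional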
18:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.075 nvme0n1 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.075 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.641 nvme0n1 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.641 18:06:54 
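The for-loops at host/auth.sh@101-102, visible right where the trace switches groups above, drive the whole sweep: every DH group is tried against every key ID under the sha384 digest (ffdhe4096 and ffdhe6144 are complete at this point; the ffdhe8192 iterations follow). The reconstructed shape of that driver loop is below, with the array contents inferred from what this log actually exercises; the full suite may cover more digests and groups than shown here.

    # Loop structure per host/auth.sh@101-104; contents inferred from this log.
    digest=sha384
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do                      # @101
        for keyid in "${!keys[@]}"; do                       # @102, key IDs 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103, target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104, host side
        done
    done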
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.641 18:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.575 nvme0n1 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.575 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.576 18:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.509 nvme0n1 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.509 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.510 
18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.510 18:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.442 nvme0n1 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.442 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:11.700 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.701 18:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.634 nvme0n1 00:24:12.634 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.634 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.634 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.634 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.634 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.634 18:06:58 
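Each connect_authenticate pass the trace just completed (and repeats below) boils down to four RPCs issued to the SPDK application acting as the NVMe-oF host: restrict the negotiable digest and DH group, attach with the host-side DH-HMAC-CHAP keys, verify the controller materialized, and detach. A minimal sketch of one pass, assuming scripts/rpc.py (the usual rpc_cmd backend) is invocable as rpc.py and that the key2/ckey2 keyring entries were registered earlier in the run:

    # host/auth.sh@60-@65 for digest=sha384 dhgroup=ffdhe8192 keyid=2
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0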
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.634 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.634 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.635 18:06:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.635 18:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.568 nvme0n1 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.569 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:13.828 nvme0n1 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.828 18:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.086 nvme0n1 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:14.087 
18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.087 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.345 nvme0n1 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.345 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.346 
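The ckey array assignment at host/auth.sh@58, seen in every pass, is what makes the controller-side (bidirectional) key optional per key index: ${var:+words} expands to the alternate words only when the variable is set and non-empty, so an index with no controller key (keyid 4 in this sweep carries an empty ckey) yields an empty array and the attach runs with host authentication only. A sketch of the mechanism with illustrative variable names ($ip, $hostnqn, $subnqn stand in for the values traced here):

    # Two extra arguments when a controller key exists, none otherwise.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"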
18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.346 nvme0n1 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.346 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.604 nvme0n1 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.604 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.863 18:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.863 nvme0n1 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.863 
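The nvmf/common.sh@741-@755 block repeated before every attach is get_main_ns_ip resolving which address to dial: an associative array maps the transport to the name of an environment variable, and an indirect expansion (it happens between the traced lines @748 and @750) turns that name into 10.0.0.1. Reconstructed as a sketch; the trace only shows the transport's value (tcp), so the variable holding it is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # @747: bail out if the transport or its candidate is unset
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip=NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1               # @750
        echo "$ip"                             # @755
    }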
18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.863 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.122 18:07:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.122 nvme0n1 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.122 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.123 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.123 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.123 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.123 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.123 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.123 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:15.408 18:07:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.408 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.409 nvme0n1 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.409 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.667 18:07:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.667 nvme0n1 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.667 
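The key/ckey literals throughout this sweep follow the NVMe DH-HMAC-CHAP secret representation, "DHHC-1:<id>:<base64>:", where the id field indicates how the secret is used (00 as-is; 01/02/03 transformed with SHA-256/384/512, carrying 32/48/64-byte secrets) and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick length check of the keyid-2 secret from this trace bears that out (a sketch using only the logged value):

    secret='DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8:'
    b64=${secret#DHHC-1:*:}                  # drop the "DHHC-1:01:" prefix
    b64=${b64%:}                             # drop the trailing ":"
    printf '%s' "$b64" | base64 -d | wc -c   # 36 bytes = 32-byte key + CRC-32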
18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.667 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.668 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.668 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.668 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.668 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.668 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.668 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.668 18:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
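The xtrace above is one full pass of the test's driver loop (the host/auth.sh@101-104 frames): an outer loop over DH groups and an inner loop over the five key indices, where each iteration first installs the key on the target and then authenticates against it. A minimal reconstruction from those frames follows; the dhgroups/keys arrays themselves are defined earlier in auth.sh and are not shown in this section of the log.

for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 in this run
    for keyid in "${!keys[@]}"; do       # 0..4, per the "for keyid" frames at auth.sh@102
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done

The digest is pinned to sha512 throughout this portion of the trace. The five keys cycle through the DHHC-1 secret variants visible in the echoed values (:00: unhashed through :03: SHA-512-transformed), and keyid 4 has an empty controller key (the bare [[ -z '' ]] at auth.sh@51), so that iteration exercises unidirectional authentication only.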
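The target-side half, nvmet_auth_set_key (auth.sh@42-51), only shows up in the trace as four echo commands; the redirection targets are not recorded. The sketch below is therefore an assumption-based reconstruction: the hostdir path and the dhchap_* attribute names are taken from the standard Linux nvmet configfs layout, not from this log, and the keys/ckeys arrays are the ones populated earlier in the script.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # ASSUMPTION: the echoes at auth.sh@48-51 land in these nvmet configfs
    # attributes; the hostnqn matches the -q argument used elsewhere in the log
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$hostdir/dhchap_hash"    # e.g. 'hmac(sha512)'
    echo "$dhgroup" > "$hostdir/dhchap_dhgroup"      # e.g. ffdhe3072
    echo "$key" > "$hostdir/dhchap_key"              # DHHC-1:xx:<base64>: host key
    # a controller key is only installed when this keyid has one (keyid 4 does not)
    [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
}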
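The initiator-side half, connect_authenticate, is fully visible in the trace: it restricts the SPDK bdev layer to the digest/dhgroup pair under test, resolves the address to dial via get_main_ns_ip (nvmf/common.sh@741-755, an associative transport-to-variable map read back through indirect expansion), attaches with the DH-HMAC-CHAP key material, and checks that the controller actually came up before detaching. A condensed sketch using only commands that appear in the log; rpc_cmd is assumed to be the suite's usual wrapper around SPDK's scripts/rpc.py, and TEST_TRANSPORT stands in for the literal 'tcp' recorded in the trace.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
        return 1
    fi
    ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of the variable to read
    [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # the attach only succeeds if the DH-HMAC-CHAP handshake completed;
    # the name check below is the [[ nvme0 == \n\v\m\e\0 ]] seen at auth.sh@64
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

Note that the real script builds the --dhchap-ctrlr-key argument conditionally (the ckey=(...) expansion at auth.sh@58); the unconditional form above is a simplification, which is why the keyid-4 iterations in the trace attach with --dhchap-key key4 alone.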
00:24:15.926 nvme0n1 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:15.926 18:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.926 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.185 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.443 nvme0n1 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.443 18:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.443 18:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.443 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.444 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.444 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.444 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.444 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.702 nvme0n1 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.702 18:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.961 nvme0n1 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.961 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.218 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.219 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.219 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.475 nvme0n1 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.475 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.733 nvme0n1 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.733 18:07:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.733 18:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.299 nvme0n1 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.299 18:07:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.299 18:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.865 nvme0n1 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.865 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.123 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.381 nvme0n1 00:24:19.381 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.381 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.381 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.381 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.381 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.381 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.639 18:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.205 nvme0n1 00:24:20.205 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.205 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.205 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.205 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.205 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.205 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.206 18:07:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.206 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.771 nvme0n1 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjBhNDE4ZTQxOTU1MWQ0YzRjMTQ2NmNkZTFlMzc5MmLuwUGE: 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDVhMWIxNjI5NzA3NDIyYmFkNGRjOGY5NTdhZGZlNWFjNGQ4MGNhMjAxNWMxN2JkYzM0NWNjNTU0MDM4NmVlMVhW6Bo=: 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
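Stripped of the rpc_cmd/xtrace plumbing, each pass of this digest/dhgroup loop is two RPCs against the host-side SPDK application; a sketch of the keyid-0 pass using the exact flags traced here (the key0/ckey0 material itself was registered earlier by the script):

./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0   # ctrlr key makes the auth bidirectional
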
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.771 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.772 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.772 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.772 18:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.704 nvme0n1 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:21.704 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.705 18:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 nvme0n1 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.077 18:07:08 
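On the target side, the three echo lines of each nvmet_auth_set_key call are writes into the kernel nvmet configfs host entry; roughly, assuming the standard Linux nvmet DH-CHAP attribute names:

h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$h/dhchap_hash"        # digest, from the first echo
echo ffdhe8192 > "$h/dhchap_dhgroup"          # DH group, from the second echo
echo 'DHHC-1:00:...:' > "$h/dhchap_key"       # host secret (keyN above)
echo 'DHHC-1:02:...:' > "$h/dhchap_ctrl_key"  # controller secret (ckeyN), when one is set
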
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTNjY2EyYjRhNjk4NGQxODkxOTAwMTlmZTVhNDA1MmJg/sR8: 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQxMDBiYTBiOGI3Zjc3YWM1YTllM2EyZDRiYmI1YWG3Kh5R: 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.077 18:07:08 
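Every successful combination is verified the same way: the attached controller must be listed under the name requested with -b, and must detach cleanly. As plain commands:

./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0
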
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.077 18:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.642 nvme0n1 00:24:23.643 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.643 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.643 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.643 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.643 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.643 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGY0YjUyMTRjMTc2NGFhNTQzYzI3Mjk4MmUzNWFiNTkyOGUxYjgyYWRiYjIwNzcxcD3QIA==: 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThlNGMyMDA4NTMzMmNmZTgxYWUyMmRjYjFmOTk1NjGJYQMa: 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.900 18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.900 
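The get_main_ns_ip block that repeats before every attach is only a transport-to-variable lookup; condensed from the trace, it is equivalent to:

# with TEST_TRANSPORT=tcp this resolves to $NVMF_INITIATOR_IP, i.e. 10.0.0.1
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
ip=${ip_candidates[$TEST_TRANSPORT]}
echo "${!ip}"
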
18:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.832 nvme0n1 00:24:24.832 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.832 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.832 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTlmZTNjNTQwNTQ5Yzk3ZmJkNjJkNjI2Nzk0MzQ4ZjA3NGRiMWIyNGFlOWMzOTUyZDU4YWMzODg5NGIxZDRjNopIU34=: 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:24.833 18:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.833 18:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.766 nvme0n1 00:24:25.766 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.766 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.766 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.766 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.766 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.766 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGYxNTNhNmFkMGZlYzMxMTE5YjViMzJiMDJhNDM4OGExYTM2NTBiNTRkYzc1MzJhIrsLbw==: 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjVjOWI2ZmM2NjlmY2E1NThhZGI4Y2UwZDk3NmE4ZDZhNzFlOGUxOTkwMDNiMDk2/K0vRw==: 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.025 request: 00:24:26.025 { 00:24:26.025 "name": "nvme0", 00:24:26.025 "trtype": "tcp", 00:24:26.025 "traddr": "10.0.0.1", 00:24:26.025 "adrfam": "ipv4", 00:24:26.025 "trsvcid": "4420", 00:24:26.025 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:26.025 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:26.025 "prchk_reftag": false, 00:24:26.025 "prchk_guard": false, 00:24:26.025 "hdgst": false, 00:24:26.025 "ddgst": false, 00:24:26.025 "method": "bdev_nvme_attach_controller", 00:24:26.025 "req_id": 1 00:24:26.025 } 00:24:26.025 Got JSON-RPC error response 00:24:26.025 response: 00:24:26.025 { 00:24:26.025 "code": -5, 00:24:26.025 "message": "Input/output error" 00:24:26.025 } 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.025 18:07:12 
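This is the first negative case: the target still demands DH-HMAC-CHAP, the host attaches with no key at all, and the connect correctly fails with the -5 (Input/output error) JSON-RPC response above. The NOT wrapper from autotest_common.sh turns that expected failure into a passing step; a simplified sketch (the real helper also treats exit codes above 128 separately):

NOT() { if "$@"; then return 1; else return 0; fi; }
NOT rpc_cmd bdev_nvme_attach_controller ...   # step succeeds only because the attach fails
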
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.025 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.026 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.284 request: 00:24:26.284 { 00:24:26.284 "name": "nvme0", 00:24:26.284 "trtype": "tcp", 00:24:26.284 "traddr": "10.0.0.1", 00:24:26.284 "adrfam": "ipv4", 00:24:26.284 "trsvcid": "4420", 00:24:26.284 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:26.284 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:26.284 "prchk_reftag": false, 00:24:26.284 "prchk_guard": false, 00:24:26.284 "hdgst": false, 00:24:26.284 "ddgst": false, 00:24:26.284 "dhchap_key": "key2", 00:24:26.284 "method": "bdev_nvme_attach_controller", 00:24:26.284 "req_id": 1 00:24:26.284 } 00:24:26.284 Got JSON-RPC error response 00:24:26.284 response: 00:24:26.284 { 00:24:26.284 "code": -5, 00:24:26.284 "message": "Input/output error" 00:24:26.284 } 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.284 request: 00:24:26.284 { 00:24:26.284 "name": "nvme0", 00:24:26.284 "trtype": "tcp", 00:24:26.284 "traddr": "10.0.0.1", 00:24:26.284 "adrfam": "ipv4", 00:24:26.284 "trsvcid": "4420", 00:24:26.284 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:26.284 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:26.284 "prchk_reftag": false, 00:24:26.284 "prchk_guard": false, 00:24:26.284 "hdgst": false, 00:24:26.284 "ddgst": false, 00:24:26.284 "dhchap_key": "key1", 00:24:26.284 "dhchap_ctrlr_key": "ckey2", 00:24:26.284 "method": "bdev_nvme_attach_controller", 00:24:26.284 "req_id": 1 00:24:26.284 } 00:24:26.284 Got JSON-RPC error response 00:24:26.284 response: 00:24:26.284 { 00:24:26.284 "code": -5, 00:24:26.284 "message": "Input/output error" 00:24:26.284 } 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.284 rmmod nvme_tcp 00:24:26.284 rmmod nvme_fabrics 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2872602 ']' 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2872602 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2872602 ']' 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2872602 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2872602 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:26.284 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:26.285 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2872602' 00:24:26.285 killing process with pid 2872602 00:24:26.285 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2872602 00:24:26.285 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2872602 00:24:26.542 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.542 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.542 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.542 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.543 18:07:12 
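Teardown starts by stopping the userspace target: killprocess confirms the pid is still alive, refuses to signal a bare sudo, then terminates and reaps it. Condensed from the trace above:

kill -0 "$pid"                                                   # still running?
[[ $(ps --no-headers -o comm= "$pid") != sudo ]] && kill "$pid"  # reactor_0, not sudo
wait "$pid"                                                      # reap the target process
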
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.543 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.543 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.543 18:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:29.072 18:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:30.006 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:30.006 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:30.006 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:30.006 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:30.006 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:30.006 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:30.006 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:30.006 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:30.006 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:30.939 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:30.939 18:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gxt /tmp/spdk.key-null.Adn /tmp/spdk.key-sha256.JuB /tmp/spdk.key-sha384.46h /tmp/spdk.key-sha512.zpN /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:30.939 18:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
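The kernel target is dismantled strictly in reverse order of creation, since configfs directories can only be rmdir'ed once their symlinks and children are gone. The sequence above is equivalent to the sketch below (the destination of the bare 'echo 0' is an assumption; it reads like disabling the namespace before removal):

c=/sys/kernel/config/nvmet
rm "$c/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir "$c/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$c/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable"   # assumed path
rm -f "$c/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
rmdir "$c/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1" "$c/ports/1" \
      "$c/subsystems/nqn.2024-02.io.spdk:cnode0"
modprobe -r nvmet_tcp nvmet
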
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:32.314 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:32.314 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:32.314 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:32.314 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:32.314 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:32.314 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:32.314 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:32.314 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:32.314 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:32.314 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:32.314 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:32.314 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:32.314 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:32.314 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:32.314 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:32.314 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:32.314 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:32.314 00:24:32.314 real 0m49.921s 00:24:32.314 user 0m47.342s 00:24:32.314 sys 0m5.905s 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.314 ************************************ 00:24:32.314 END TEST nvmf_auth_host 00:24:32.314 ************************************ 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.314 ************************************ 00:24:32.314 START TEST nvmf_digest 00:24:32.314 ************************************ 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:32.314 * Looking for test storage... 
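The asterisk banners and the real/user/sys block above come from the run_test harness in autotest_common.sh, which brackets and times every sub-test; schematically (a sketch, not the literal helper):

run_test() {
  local name=$1; shift
  echo "START TEST $name"   # printed between rows of asterisks
  time "$@"                 # produces the real/user/sys summary
  echo "END TEST $name"
}
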
00:24:32.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.314 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:32.315 
18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:32.315 18:07:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.215 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:34.216 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:34.216 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.216 
18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:34.216 Found net devices under 0000:09:00.0: cvl_0_0 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:34.216 Found net devices under 0000:09:00.1: cvl_0_1 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.216 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.476 18:07:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:24:34.476 00:24:34.476 --- 10.0.0.2 ping statistics --- 00:24:34.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.476 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:24:34.476 00:24:34.476 --- 10.0.0.1 ping statistics --- 00:24:34.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.476 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.476 ************************************ 00:24:34.476 START TEST nvmf_digest_clean 00:24:34.476 ************************************ 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:34.476 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2882073 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2882073 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2882073 ']' 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.477 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:34.477 [2024-07-24 18:07:20.717075] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:24:34.477 [2024-07-24 18:07:20.717188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.735 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.735 [2024-07-24 18:07:20.781790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.735 [2024-07-24 18:07:20.886446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.735 [2024-07-24 18:07:20.886498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.735 [2024-07-24 18:07:20.886522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.735 [2024-07-24 18:07:20.886534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.735 [2024-07-24 18:07:20.886544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
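
nvmfappstart above boots the target on core 0 inside the target-side namespace and parks it at --wait-for-rpc so the harness can configure it before initialization completes. A minimal sketch of that launch-and-wait pattern, assuming the same workspace layout (the polling loop below is a stand-in for autotest's waitforlisten helper):

    # Start the NVMe-oF target in the target namespace, deferring framework init.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the RPC socket until the app answers; rpc_get_methods is a no-op probe.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
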
00:24:34.735 [2024-07-24 18:07:20.886588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.735 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.735 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:34.735 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.735 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.735 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:34.735 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.735 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:34.736 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:34.736 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:34.736 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.736 18:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:34.994 null0 00:24:34.994 [2024-07-24 18:07:21.057995] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.994 [2024-07-24 18:07:21.082268] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2882094 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2882094 /var/tmp/bperf.sock 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2882094 ']' 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.994 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:34.994 [2024-07-24 18:07:21.129489] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:24:34.994 [2024-07-24 18:07:21.129550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882094 ] 00:24:34.994 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.994 [2024-07-24 18:07:21.190418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.253 [2024-07-24 18:07:21.309713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.253 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.253 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:35.253 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:35.253 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:35.253 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:35.540 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.540 18:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:36.106 nvme0n1 00:24:36.106 18:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:36.106 18:07:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:36.106 Running I/O for 2 seconds... 
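
Each bperf pass is driven entirely over its private RPC socket in three steps: finish the init deferred by --wait-for-rpc, attach an NVMe/TCP controller with data digest enabled, then trigger the timed workload. Condensed from the traced commands above:

    RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC framework_start_init                  # complete deferred initialization
    $RPC bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0 # --ddgst enables NVMe/TCP data digest
    # Kick off the run; shape and duration come from bdevperf's -w/-o/-q/-t flags.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
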
00:24:38.633 00:24:38.633 Latency(us) 00:24:38.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.633 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:38.633 nvme0n1 : 2.00 19402.75 75.79 0.00 0.00 6587.30 3616.62 16796.63 00:24:38.633 =================================================================================================================== 00:24:38.633 Total : 19402.75 75.79 0.00 0.00 6587.30 3616.62 16796.63 00:24:38.633 0 00:24:38.633 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:38.633 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:38.633 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:38.633 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:38.633 | select(.opcode=="crc32c") 00:24:38.633 | "\(.module_name) \(.executed)"' 00:24:38.633 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:38.633 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2882094 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2882094 ']' 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2882094 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2882094 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2882094' 00:24:38.634 killing process with pid 2882094 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2882094 00:24:38.634 Received shutdown signal, test time was about 2.000000 seconds 00:24:38.634 00:24:38.634 Latency(us) 00:24:38.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.634 =================================================================================================================== 00:24:38.634 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.634 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2882094 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2882507 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2882507 /var/tmp/bperf.sock 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2882507 ']' 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:38.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.892 18:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:38.892 [2024-07-24 18:07:24.949889] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:24:38.892 [2024-07-24 18:07:24.949974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882507 ] 00:24:38.892 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:38.892 Zero copy mechanism will not be used. 
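
The "I/O size of 131072 is greater than zero copy threshold (65536)" notice just printed is informational: for the 128 KiB passes the tool falls back to ordinary copying sends instead of its zero-copy path, which has no bearing on digest correctness. The throughput columns in the result tables are also easy to sanity-check, since MiB/s is just IOPS times block size:

    # MiB/s = IOPS * bs / 2^20
    # 4 KiB randread pass (table above):   19402.75 * 4096   / 1048576 = 75.79 MiB/s
    # 128 KiB randread pass (table below):  3607.62 * 131072 / 1048576 = 450.95 MiB/s
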
00:24:38.892 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.892 [2024-07-24 18:07:25.011829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.892 [2024-07-24 18:07:25.131622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.150 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.150 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:39.150 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:39.150 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:39.150 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:39.409 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:39.409 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:39.667 nvme0n1 00:24:39.667 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:39.667 18:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:39.925 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:39.925 Zero copy mechanism will not be used. 00:24:39.925 Running I/O for 2 seconds... 
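
Once each pass completes, the harness tears the bdevperf instance down with autotest's killprocess helper, as traced after the first results table above. Reconstructed from those lines (the sudo special case is elided in this sketch):

    kill -0 "$pid"                          # assert the process is still alive
    process_name=$(ps --no-headers -o comm= "$pid")
    # the real helper special-cases process_name = sudo; omitted here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                             # reap bdevperf, propagating its exit code
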
00:24:41.822 00:24:41.822 Latency(us) 00:24:41.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.822 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:41.822 nvme0n1 : 2.00 3607.62 450.95 0.00 0.00 4429.86 3907.89 12718.84 00:24:41.822 =================================================================================================================== 00:24:41.822 Total : 3607.62 450.95 0.00 0.00 4429.86 3907.89 12718.84 00:24:41.822 0 00:24:41.822 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:41.822 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:41.822 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:41.822 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:41.822 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:41.822 | select(.opcode=="crc32c") 00:24:41.822 | "\(.module_name) \(.executed)"' 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2882507 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2882507 ']' 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2882507 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2882507 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2882507' 00:24:42.080 killing process with pid 2882507 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2882507 00:24:42.080 Received shutdown signal, test time was about 2.000000 seconds 00:24:42.080 00:24:42.080 Latency(us) 00:24:42.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.080 =================================================================================================================== 00:24:42.080 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.080 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2882507 00:24:42.338 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2883026 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2883026 /var/tmp/bperf.sock 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2883026 ']' 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:42.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:42.597 [2024-07-24 18:07:28.648734] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
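
nvmf_digest_clean drives the same helper four times, crossing block size with I/O direction at a fixed runtime; the trailing boolean disables DSA offload scanning. Equivalent to the loop below (a sketch inside the script's own context; host/digest.sh@128-131 actually call run_bperf directly):

    for spec in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        set -- $spec                    # rw, block size (bytes), queue depth
        run_bperf "$1" "$2" "$3" false  # false: no DSA accel scan
    done
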
00:24:42.597 [2024-07-24 18:07:28.648820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883026 ] 00:24:42.597 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.597 [2024-07-24 18:07:28.711871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.597 [2024-07-24 18:07:28.831504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:42.597 18:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:43.164 18:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.164 18:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.423 nvme0n1 00:24:43.423 18:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:43.423 18:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:43.423 Running I/O for 2 seconds... 
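
After every timed pass the harness confirms that CRC-32C digests were really computed, and by the expected accel module: it pulls accel statistics over the bperf RPC socket and parses the crc32c opcode row. The check, condensed from the traced commands:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))              # some digests were actually computed
    [[ $acc_module == software ]]       # DSA scan disabled, so software is expected
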
00:24:45.954 00:24:45.954 Latency(us) 00:24:45.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.954 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:45.954 nvme0n1 : 2.00 21198.75 82.81 0.00 0.00 6027.88 2560.76 11311.03 00:24:45.954 =================================================================================================================== 00:24:45.954 Total : 21198.75 82.81 0.00 0.00 6027.88 2560.76 11311.03 00:24:45.954 0 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:45.954 | select(.opcode=="crc32c") 00:24:45.954 | "\(.module_name) \(.executed)"' 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2883026 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2883026 ']' 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2883026 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2883026 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.954 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2883026' 00:24:45.954 killing process with pid 2883026 00:24:45.955 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2883026 00:24:45.955 Received shutdown signal, test time was about 2.000000 seconds 00:24:45.955 00:24:45.955 Latency(us) 00:24:45.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.955 =================================================================================================================== 00:24:45.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.955 18:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2883026 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2883441 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2883441 /var/tmp/bperf.sock 00:24:46.213 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2883441 ']' 00:24:46.214 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:46.214 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.214 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:46.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:46.214 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.214 18:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.214 [2024-07-24 18:07:32.301930] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:24:46.214 [2024-07-24 18:07:32.302018] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883441 ] 00:24:46.214 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:46.214 Zero copy mechanism will not be used. 
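
For reference, the bdevperf command line repeated above decodes as follows; this is a reader's gloss based on commonly documented flag meanings, not the tool's own usage text:

    # -m 2                     core mask 0x2: run on core 1, away from the target on core 0
    # -r /var/tmp/bperf.sock   private RPC socket for this bdevperf instance
    # -w randwrite -o 131072 -q 16 -t 2
    #                          workload, I/O size (bytes), queue depth, runtime (seconds)
    # -z                       idle after startup until perform_tests arrives over RPC
    # --wait-for-rpc           defer framework init until framework_start_init
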
00:24:46.214 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.214 [2024-07-24 18:07:32.363560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.214 [2024-07-24 18:07:32.481908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.147 18:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.147 18:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:47.147 18:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:47.147 18:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:47.147 18:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:47.405 18:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.405 18:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.971 nvme0n1 00:24:47.971 18:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:47.971 18:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.229 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:48.229 Zero copy mechanism will not be used. 00:24:48.229 Running I/O for 2 seconds... 
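
The nvme0n1 printed before each run is the block device that SPDK's bdev_nvme layer creates from namespace 1 of the controller attached as -b nvme0, and it is the sole target of the timed I/O. It could be inspected with, e.g.:

    # controller name nvme0 -> bdevs nvme0n1, nvme0n2, ... (one per namespace)
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_bdevs -b nvme0n1
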
00:24:50.128 00:24:50.128 Latency(us) 00:24:50.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.128 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:50.128 nvme0n1 : 2.01 2830.90 353.86 0.00 0.00 5639.19 4247.70 11990.66 00:24:50.128 =================================================================================================================== 00:24:50.128 Total : 2830.90 353.86 0.00 0.00 5639.19 4247.70 11990.66 00:24:50.128 0 00:24:50.128 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:50.128 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:50.128 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:50.128 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:50.128 | select(.opcode=="crc32c") 00:24:50.128 | "\(.module_name) \(.executed)"' 00:24:50.128 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2883441 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2883441 ']' 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2883441 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2883441 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2883441' 00:24:50.386 killing process with pid 2883441 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2883441 00:24:50.386 Received shutdown signal, test time was about 2.000000 seconds 00:24:50.386 00:24:50.386 Latency(us) 00:24:50.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.386 =================================================================================================================== 00:24:50.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.386 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 2883441 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2882073 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2882073 ']' 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2882073 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2882073 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2882073' 00:24:50.644 killing process with pid 2882073 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2882073 00:24:50.644 18:07:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2882073 00:24:50.902 00:24:50.902 real 0m16.501s 00:24:50.902 user 0m33.205s 00:24:50.902 sys 0m4.113s 00:24:50.902 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.902 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.902 ************************************ 00:24:50.902 END TEST nvmf_digest_clean 00:24:50.902 ************************************ 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:51.161 ************************************ 00:24:51.161 START TEST nvmf_digest_error 00:24:51.161 ************************************ 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2884008 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:51.161 18:07:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2884008 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2884008 ']' 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.161 18:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.161 [2024-07-24 18:07:37.279798] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:24:51.161 [2024-07-24 18:07:37.279898] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.161 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.161 [2024-07-24 18:07:37.347966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.419 [2024-07-24 18:07:37.466069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.419 [2024-07-24 18:07:37.466141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.419 [2024-07-24 18:07:37.466191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.419 [2024-07-24 18:07:37.466204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.419 [2024-07-24 18:07:37.466215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.419 [2024-07-24 18:07:37.466241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.985 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.985 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:51.985 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.985 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.986 [2024-07-24 18:07:38.244656] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.986 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:52.244 null0 00:24:52.244 [2024-07-24 18:07:38.357507] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.244 [2024-07-24 18:07:38.381721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2884159 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2884159 /var/tmp/bperf.sock 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2884159 ']' 
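With framework init complete, the target has created the null0 bdev and an NVMe/TCP listener on 10.0.0.2 port 4420 (the notices above), and bdevperf has been started idle (-z) on its own RPC socket, /var/tmp/bperf.sock, so the workload can be kicked off later over RPC. The error path is then armed with the commands the trace shows; a sketch of the equivalent manual steps, assuming the stock rpc.py client and the socket paths from the trace (bdevperf-side options and the attach go to /var/tmp/bperf.sock, while the corruption trigger goes to the target's default socket, since the error module was assigned there):

  # bdevperf side: keep per-command error stats and retry failed I/O indefinitely
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach to the target with the TCP data digest (--ddgst) enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: make the error module corrupt its crc32c results (-i 256 as traced)
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # drive the 2-second randread workload through bdevperf's RPC
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

In the run that follows, each record pairs the host-side detection in nvme_tcp.c ("data digest error" on the qpair) with the READ it belongs to and a completion of COMMAND TRANSIENT TRANSPORT ERROR (00/22); with --bdev-retry-count -1 in effect, the bdev layer keeps retrying these, so the workload continues for the full 2 seconds despite the injected digest failures.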
00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:52.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.244 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:52.244 [2024-07-24 18:07:38.428522] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:24:52.244 [2024-07-24 18:07:38.428600] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884159 ] 00:24:52.244 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.244 [2024-07-24 18:07:38.489479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.503 [2024-07-24 18:07:38.608670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.503 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.503 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:52.503 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:52.503 18:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:52.761 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:52.761 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.761 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:53.019 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.019 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.019 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.277 nvme0n1 00:24:53.277 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:53.277 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.277 18:07:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:53.277 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.277 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:53.277 18:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:53.534 Running I/O for 2 seconds... 00:24:53.534 [2024-07-24 18:07:39.591511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.534 [2024-07-24 18:07:39.591558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.534 [2024-07-24 18:07:39.591579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.534 [2024-07-24 18:07:39.605774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.605811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.605831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.621646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.621681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.621700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.634018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.634052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.634071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.649264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.649295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.649314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.664912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.664947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.664966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.677082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.677128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.677164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.692929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.692963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.692982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.705867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.705902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.705921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.720613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.720653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.720673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.734707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.734741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.734767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.746769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.746804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.746823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.760981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.761015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:53.535 [2024-07-24 18:07:39.761033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.775220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.775251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.775268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.787061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.787109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.787129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.535 [2024-07-24 18:07:39.801340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.535 [2024-07-24 18:07:39.801386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.535 [2024-07-24 18:07:39.801411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.793 [2024-07-24 18:07:39.816748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.793 [2024-07-24 18:07:39.816792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.793 [2024-07-24 18:07:39.816811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.793 [2024-07-24 18:07:39.830022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.793 [2024-07-24 18:07:39.830058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.793 [2024-07-24 18:07:39.830077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.793 [2024-07-24 18:07:39.845980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.793 [2024-07-24 18:07:39.846014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.793 [2024-07-24 18:07:39.846033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.793 [2024-07-24 18:07:39.860262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.793 [2024-07-24 18:07:39.860293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:22553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.793 [2024-07-24 18:07:39.860310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.793 [2024-07-24 18:07:39.876893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.793 [2024-07-24 18:07:39.876926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.793 [2024-07-24 18:07:39.876946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.793 [2024-07-24 18:07:39.888813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.793 [2024-07-24 18:07:39.888846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.793 [2024-07-24 18:07:39.888865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.793 [2024-07-24 18:07:39.905151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.793 [2024-07-24 18:07:39.905180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.793 [2024-07-24 18:07:39.905195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:39.916829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:39.916862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:39.916881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:39.932037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:39.932071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:39.932090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:39.945416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:39.945465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:39.945484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:39.959200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:39.959231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:39.959248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:39.970541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:39.970575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:39.970599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:39.985471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:39.985504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:39.985523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:39.999880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:39.999914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:39.999933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:40.014483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:40.014546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:40.014564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:40.025405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:40.025438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:40.025455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:40.039600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:53.794 [2024-07-24 18:07:40.039636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:40.039655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.794 [2024-07-24 18:07:40.052525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 
00:24:53.794 [2024-07-24 18:07:40.052560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.794 [2024-07-24 18:07:40.052580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.052 [2024-07-24 18:07:40.067940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.067977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.067997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.080971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.081010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.081030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.094468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.094533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.094553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.109296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.109325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.109360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.124862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.124906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.124925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.136849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.136882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.136900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.152519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.152553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.152573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.167117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.167165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.167183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.182811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.182845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.182864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.193876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.193910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.193930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.209740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.209774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.209794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.222798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.222832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.222851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.238376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.238421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.238439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.252723] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.252757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.252776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.265464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.265498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.265517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.281196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.281241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.281257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.294293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.294323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.294340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.307072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.053 [2024-07-24 18:07:40.307122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.053 [2024-07-24 18:07:40.307155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.053 [2024-07-24 18:07:40.321361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.321395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.321413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.337296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.337342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.337364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.351696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.351731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.351750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.364247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.364293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.364310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.378442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.378476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.378495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.390132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.390179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.390195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.406366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.406415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.406434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.422494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.422529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.422547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.434483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.434516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.434535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.448289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.448318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.448334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.463942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.463981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.464001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.476061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.476094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.476121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.489562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.489596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.489616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.506743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.506777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.506795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.518852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.518885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.518904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.535176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.535206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.535223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.549419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.549453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.549472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.564569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.564602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.564621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.312 [2024-07-24 18:07:40.576985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.312 [2024-07-24 18:07:40.577020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.312 [2024-07-24 18:07:40.577046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.591455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.591492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.591511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.606365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.606414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.606433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.620378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.620426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.620446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.632450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.632484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:54.572 [2024-07-24 18:07:40.632504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.645116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.645149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.645182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.660279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.660309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.660326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.677008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.677042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.677061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.689276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.689307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.689324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.703075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.703123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.703158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.717726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.717760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.717778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.731386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.731432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:10154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.731449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.742738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.742771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.742790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.758378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.758406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.758421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.771520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.771553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.771571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.786286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.786316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.786332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.799500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.799537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.799556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.811662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.811697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.811715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.572 [2024-07-24 18:07:40.826157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.572 [2024-07-24 18:07:40.826189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.572 [2024-07-24 18:07:40.826206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.831 [2024-07-24 18:07:40.842066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.831 [2024-07-24 18:07:40.842113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.831 [2024-07-24 18:07:40.842150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.831 [2024-07-24 18:07:40.854595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.831 [2024-07-24 18:07:40.854632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.831 [2024-07-24 18:07:40.854652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.831 [2024-07-24 18:07:40.872170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.831 [2024-07-24 18:07:40.872202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.831 [2024-07-24 18:07:40.872219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.831 [2024-07-24 18:07:40.883283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.831 [2024-07-24 18:07:40.883312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.831 [2024-07-24 18:07:40.883343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.831 [2024-07-24 18:07:40.899801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.831 [2024-07-24 18:07:40.899835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.831 [2024-07-24 18:07:40.899854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.831 [2024-07-24 18:07:40.914042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 00:24:54.831 [2024-07-24 18:07:40.914078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.831 [2024-07-24 18:07:40.914097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.831 [2024-07-24 18:07:40.930899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0) 
00:24:54.831 [2024-07-24 18:07:40.930933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.831 [2024-07-24 18:07:40.930952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:54.831 [2024-07-24 18:07:40.942904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d5dcb0)
00:24:54.831 [2024-07-24 18:07:40.942939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:54.831 [2024-07-24 18:07:40.942965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x1d5dcb0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining in-flight 4096-byte (len:1) READs, timestamps 18:07:40.956831 through 18:07:41.581922; only the cid/lba values differ ...]
00:24:55.350
00:24:55.350 Latency(us)
00:24:55.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:55.350 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:55.350 nvme0n1 : 2.01 18191.96 71.06 0.00 0.00 7026.37 3398.16 19418.07
00:24:55.350 ===================================================================================================================
00:24:55.350 Total : 18191.96 71.06 0.00 0.00 7026.37 3398.16 19418.07
00:24:55.350 0
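The (00/22) in each completion above decodes as status code type 0x0 (generic command status) / status code 0x22, NVMe's Command Transient Transport Error: the injected CRC-32C corruption makes the host-side data digest check fail, so each READ completes with a retryable transport status instead of returning silently corrupted data. One way to cross-check the counter the test reads next is to count these completions in the captured console output; the log file name below is only a placeholder:

    # hypothetical sanity check, not part of digest.sh: count transient
    # transport error completions printed by the host
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log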
00:24:55.350 18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:55.350 | .driver_specific
00:24:55.350 | .nvme_error
00:24:55.350 | .status_code
00:24:55.350 | .command_transient_transport_error'
00:24:55.608 18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
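The two RPCs traced above are the whole of the verification step: bdev_get_iostat exposes the per-bdev NVMe error counters switched on earlier with bdev_nvme_set_options --nvme-error-stat, and the jq filter pulls out the transient-transport-error count (143 here), which the test only requires to be non-zero. A minimal sketch of the traced helper, assuming the rpc.py path and socket from this run (the real digest.sh routes the call through its bperf_rpc wrapper):

    # sketch of get_transient_errcount as seen in the trace; paths are the
    # ones this CI run uses, not guaranteed to match digest.sh verbatim
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }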
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2884159
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2884159 ']'
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2884159
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2884159
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2884159'
killing process with pid 2884159
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2884159
Received shutdown signal, test time was about 2.000000 seconds
00:24:55.608
00:24:55.608 Latency(us)
00:24:55.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:55.608 ===================================================================================================================
00:24:55.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:55.608
18:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2884159
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2884686
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2884686 /var/tmp/bperf.sock
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2884686 ']'
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
18:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:56.153 [2024-07-24 18:07:42.172973] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:24:56.153 [2024-07-24 18:07:42.173056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884686 ]
00:24:56.153 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:56.153 Zero copy mechanism will not be used.
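The second test case (randread, 131072-byte I/O, queue depth 16) reuses the same harness: bdevperf is started in bperf mode on its own RPC socket, its pid recorded, and the runner blocks until the socket accepts connections. A sketch of the launch sequence visible in the trace above, assuming waitforlisten as defined in SPDK's test/common/autotest_common.sh:

    # launch flags taken verbatim from the trace; -z makes bdevperf wait
    # for an RPC before running any workload
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock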
00:24:56.153 EAL: No free 2048 kB hugepages reported on node 1
00:24:56.153 [2024-07-24 18:07:42.234170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:56.153 [2024-07-24 18:07:42.348946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:57.120 18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:57.378 18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:57.635 nvme0n1
00:24:57.635 18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
18:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:57.893 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:57.893 Zero copy mechanism will not be used.
00:24:57.893 Running I/O for 2 seconds...
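Setup for the error pass is driven entirely over the bperf socket, in the order traced above: enable per-controller NVMe error counters and unlimited bdev retries, keep crc32c generation clean while the controller attaches, attach with data digest enabled (--ddgst), then flip the accel crc32c operation into corrupt mode and start the workload through the bdevperf RPC helper. Condensed into plain commands (the $rpc shorthand is mine; -i 32 is the injection argument exactly as given in the trace, not an interpretation of its semantics):

    rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests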
00:24:57.893 [2024-07-24 18:07:43.940008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:57.893 [2024-07-24 18:07:43.940078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.893 [2024-07-24 18:07:43.940099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same triples repeat for the 131072-byte (len:32) READs on tqpair=(0x13a4290), timestamps 18:07:43.948970 through 18:07:44.537216; only the cid/lba/sqhd values differ ...]
00:24:58.412 [2024-07-24 18:07:44.547270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.412 [2024-07-24 18:07:44.547298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.412 [2024-07-24 18:07:44.547329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0
dnr:0 00:24:58.412 [2024-07-24 18:07:44.557413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.557459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.557478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.567405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.567451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.567467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.577196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.577237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.577253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.586849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.586887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.586907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.596519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.596551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.596569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.606233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.606259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.606274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.615965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.615997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.616015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.625600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.625631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.625649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.635279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.635321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.412 [2024-07-24 18:07:44.635336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.412 [2024-07-24 18:07:44.644929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.412 [2024-07-24 18:07:44.644960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.413 [2024-07-24 18:07:44.644979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.413 [2024-07-24 18:07:44.654554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.413 [2024-07-24 18:07:44.654586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.413 [2024-07-24 18:07:44.654604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.413 [2024-07-24 18:07:44.664313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.413 [2024-07-24 18:07:44.664342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.413 [2024-07-24 18:07:44.664359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.413 [2024-07-24 18:07:44.673941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.413 [2024-07-24 18:07:44.673972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.413 [2024-07-24 18:07:44.673991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.671 [2024-07-24 18:07:44.684039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.671 [2024-07-24 18:07:44.684087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.671 [2024-07-24 18:07:44.684128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.671 [2024-07-24 18:07:44.693786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.671 [2024-07-24 18:07:44.693821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.671 [2024-07-24 18:07:44.693840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.672 [2024-07-24 18:07:44.703415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.672 [2024-07-24 18:07:44.703443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.672 [2024-07-24 18:07:44.703458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.672 [2024-07-24 18:07:44.713164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.672 [2024-07-24 18:07:44.713194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.672 [2024-07-24 18:07:44.713225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.672 [2024-07-24 18:07:44.722767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.672 [2024-07-24 18:07:44.722801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.672 [2024-07-24 18:07:44.722819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.672 [2024-07-24 18:07:44.732475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.672 [2024-07-24 18:07:44.732508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.672 [2024-07-24 18:07:44.732526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.672 [2024-07-24 18:07:44.742365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.672 [2024-07-24 18:07:44.742408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.672 [2024-07-24 18:07:44.742423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.672 [2024-07-24 18:07:44.752395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:58.672 [2024-07-24 18:07:44.752439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:58.672 [2024-07-24 18:07:44.752476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.762237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.762266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.762282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.772250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.772294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.772311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.781891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.781926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.781945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.793602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.793637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.793656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.804206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.804237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.804254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.815821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.815855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.815874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.827584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.827619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.827637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.839237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.839269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.839301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.850719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.850760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.850780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.862695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.862730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.862750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.873903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.873940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.873958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.884825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.884859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.884879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.895843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.895878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.895898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.907047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.907081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.907100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.918012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.918046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.918065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.929420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.929468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.929488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.672 [2024-07-24 18:07:44.939635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.672 [2024-07-24 18:07:44.939672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.672 [2024-07-24 18:07:44.939691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.931 [2024-07-24 18:07:44.951632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.931 [2024-07-24 18:07:44.951668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.931 [2024-07-24 18:07:44.951688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.931 [2024-07-24 18:07:44.963458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.931 [2024-07-24 18:07:44.963493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.931 [2024-07-24 18:07:44.963513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.931 [2024-07-24 18:07:44.974497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.931 [2024-07-24 18:07:44.974532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.931 [2024-07-24 18:07:44.974552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.931 [2024-07-24 18:07:44.986292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.931 [2024-07-24 18:07:44.986339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.931 [2024-07-24 18:07:44.986356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.931 [2024-07-24 18:07:44.997524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.931 [2024-07-24 18:07:44.997559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.931 [2024-07-24 18:07:44.997578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.931 [2024-07-24 18:07:45.009726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.931 [2024-07-24 18:07:45.009762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.931 [2024-07-24 18:07:45.009781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.931 [2024-07-24 18:07:45.020821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.931 [2024-07-24 18:07:45.020856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.931 [2024-07-24 18:07:45.020875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.033632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.033667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.033686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.045358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.045409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.045427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.057678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.057712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.057731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.070000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.070034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.070053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.080822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.080858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.080877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.092112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.092145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.092163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.102420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.102475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.102492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.112739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.112771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.112788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.122466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.122498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.122515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.131665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.131696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.131712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.141118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.141153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.141170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.149980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.150010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.150027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.158896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.158925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.158942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.168013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.168043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.168060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.176994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.177040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.177057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.185805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.185835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.185851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.932 [2024-07-24 18:07:45.194633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:58.932 [2024-07-24 18:07:45.194664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.932 [2024-07-24 18:07:45.194680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.203746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.203779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.203797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.212463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.212495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.212521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.221286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.221317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.221334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.230294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.230326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.230343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.240151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.240183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.240200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.249630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.249661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.249679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.259998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.260030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.260047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.269071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.269110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.269129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.277782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.277812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.191 [2024-07-24 18:07:45.277829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.191 [2024-07-24 18:07:45.286602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.191 [2024-07-24 18:07:45.286632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.286649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.295675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.295711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.295729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.304544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.304574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.304590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.313401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.313430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.313447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.322384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.322415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.322432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.331875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.331907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.331924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.340816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.340848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.340864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.350572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.350604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.350621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.360043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.360074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.360091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.368874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.368905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.368921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.377605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.377635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.377651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.386492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.386522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.386539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.395298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.395327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.395344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.404060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.404090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.404116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.412749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.412780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.412796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.421622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.421652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.421669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.430851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.430883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.430900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.440313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.440343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.440360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.449528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.449559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.449598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.192 [2024-07-24 18:07:45.459482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.192 [2024-07-24 18:07:45.459520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.192 [2024-07-24 18:07:45.459542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.468985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.469018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.469035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.477957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.477988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.478006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.486727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.486758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.486775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.495685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.495715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.495732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.504552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.504583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.504599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.513372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.513402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.513419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.522179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.522208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.522224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.531950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.531989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.532008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.541897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.541928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.541946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.551208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.551239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.551257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.560363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.560393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.560410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.569231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.569260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.569277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.578053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.578083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.578100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.586813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.586843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.586860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.595740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.595770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.595787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.604540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.604569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.604586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.613342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.613373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.613389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.622375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.622404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.622420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.631734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.631764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.631781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.640726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.640755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.640771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.649522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.452 [2024-07-24 18:07:45.649552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.452 [2024-07-24 18:07:45.649568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.452 [2024-07-24 18:07:45.658460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.453 [2024-07-24 18:07:45.658490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.453 [2024-07-24 18:07:45.658506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.453 [2024-07-24 18:07:45.667235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.453 [2024-07-24 18:07:45.667264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.453 [2024-07-24 18:07:45.667281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.453 [2024-07-24 18:07:45.676175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.453 [2024-07-24 18:07:45.676204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.453 [2024-07-24 18:07:45.676221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.453 [2024-07-24 18:07:45.685015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.453 [2024-07-24 18:07:45.685052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.453 [2024-07-24 18:07:45.685070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.453 [2024-07-24 18:07:45.693803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.453 [2024-07-24 18:07:45.693833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.453 [2024-07-24 18:07:45.693850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.453 [2024-07-24 18:07:45.702529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.453 [2024-07-24 18:07:45.702560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.453 [2024-07-24 18:07:45.702576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.453 [2024-07-24 18:07:45.711441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.453 [2024-07-24 18:07:45.711471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.453 [2024-07-24 18:07:45.711487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.712 [2024-07-24 18:07:45.720342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.712 [2024-07-24 18:07:45.720375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.712 [2024-07-24 18:07:45.720402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.712 [2024-07-24 18:07:45.729406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.712 [2024-07-24 18:07:45.729438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.712 [2024-07-24 18:07:45.729456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:59.712 [2024-07-24 18:07:45.738226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.712 [2024-07-24 18:07:45.738257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.712 [2024-07-24 18:07:45.738273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:59.712 [2024-07-24 18:07:45.747114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.712 [2024-07-24 18:07:45.747144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.712 [2024-07-24 18:07:45.747160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:59.712 [2024-07-24 18:07:45.756005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.712 [2024-07-24 18:07:45.756034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.712 [2024-07-24 18:07:45.756051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.712 [2024-07-24 18:07:45.764732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290)
00:24:59.712 [2024-07-24 18:07:45.764762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.712 [2024-07-24 18:07:45.764779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.712 [2024-07-24 18:07:45.773546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.712 [2024-07-24 18:07:45.773575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.712 [2024-07-24 18:07:45.773592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.712 [2024-07-24 18:07:45.782412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.712 [2024-07-24 18:07:45.782441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.712 [2024-07-24 18:07:45.782458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.791441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.791470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.791487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.800263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.800292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.800309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.809082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.809120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.809138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.817946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.817975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.817992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.826811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.826840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.826857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.835601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.835631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.835654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.844439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.844482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.844498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.853279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.853308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.853324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.862188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.862216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.862233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.870999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.871028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.871045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.879961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.879993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.880011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.888837] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.888867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.888884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.897678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.897707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.897723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.906715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.906744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.906760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.915525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.915561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.915579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.924308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.924337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.924353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:59.713 [2024-07-24 18:07:45.933174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a4290) 00:24:59.713 [2024-07-24 18:07:45.933204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.713 [2024-07-24 18:07:45.933221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:59.713
00:24:59.713 Latency(us)
00:24:59.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.713 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:59.713 nvme0n1 : 2.00 3191.60 398.95 0.00 0.00 5007.38 1541.31 12718.84
00:24:59.713 ===================================================================================================================
00:24:59.713 Total : 3191.60 398.95 0.00 0.00 5007.38 1541.31 12718.84
00:24:59.713 0
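The randread pass above finishes with a normal-looking throughput summary because the injected digest errors complete as retryable transient transport errors rather than as I/O failures. The harness now checks that the error counter actually moved. Its get_transient_errcount helper (host/digest.sh, traced below) is just bdev_get_iostat piped through jq; a minimal standalone sketch of the same query, using this run's socket and paths:

# Sketch: read the command_transient_transport_error counter that the
# bdev_nvme driver keeps when bdev_nvme_set_options --nvme-error-stat is set.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( count > 0 )) && echo "nvme0n1: $count transient transport errors"

In this run the counter comes back as 206, which is what satisfies the (( 206 > 0 )) assertion in the trace that follows.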
00:24:59.713 18:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:59.713 | .driver_specific
00:24:59.714 | .nvme_error
00:24:59.714 | .status_code
00:24:59.714 | .command_transient_transport_error'
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 ))
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2884686
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2884686 ']'
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2884686
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2884686
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2884686'
00:24:59.971 killing process with pid 2884686
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2884686
00:24:59.971 Received shutdown signal, test time was about 2.000000 seconds
00:24:59.971
00:24:59.971 Latency(us)
00:24:59.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.971 ===================================================================================================================
00:24:59.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:59.971 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2884686
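With the randread pass torn down, the harness repeats the experiment as run_bperf_err randwrite 4096 128: 4 KiB random writes at queue depth 128 against a fresh bdevperf instance. Condensed from the trace that follows, the flow is roughly the sketch below (not the literal harness code; paths are relative to the SPDK tree, and the assumption that the accel_error_inject_error calls target the NVMe-oF target application's default RPC socket is inferred from the harness using rpc_cmd without an -s flag for them):

# Condensed sketch of the randwrite error-injection flow, assembled from the
# commands visible in the trace below.
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py accel_error_inject_error -o crc32c -t disable    # target side: clear any stale injection
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target side: arm crc32c corruption
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests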
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2885120
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2885120 /var/tmp/bperf.sock
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2885120 ']'
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:00.534 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:00.534 [2024-07-24 18:07:46.557623] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:25:00.534 [2024-07-24 18:07:46.557706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885120 ]
00:25:00.534 EAL: No free 2048 kB hugepages reported on node 1
00:25:00.534 [2024-07-24 18:07:46.622738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:00.534 [2024-07-24 18:07:46.740604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:00.791 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:00.791 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:25:00.791 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:00.791 18:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:01.048 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:01.048 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:01.048 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:01.049 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:01.049 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:01.049 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:01.615 nvme0n1
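The controller is attached with --ddgst, so every NVMe/TCP data PDU on this connection carries a crc32c data digest. The next rpc_cmd arms the corruption (-t corrupt; the -i 256 argument is taken verbatim from the harness), and perform_tests then drives two seconds of writes. Each corrupted digest produces a three-line group in the flood that follows; annotated here with the values of the first group (the decode of the fields is an editor's gloss, not harness output):

# [..] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(...) with pdu=0x...
#     -> the crc32c computed over the PDU's data (corrupted by the injection)
#        disagrees with the digest field carried in the PDU
# [..] WRITE sqid:1 cid:73 nsid:1 lba:6601 len:1 ...
#     -> the affected command: submission queue 1, command id 73, a one-block write
# [..] COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 ... dnr:0
#     -> completion status SCT 0x0 / SC 0x22 (Command Transient Transport Error);
#        dnr:0 leaves the do-not-retry bit clear, so with --bdev-retry-count -1
#        the bdev layer keeps retrying and only the error counter grows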
00:25:01.615 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:01.615 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:01.615 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:01.615 18:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:01.615 Running I/O for 2 seconds...
00:25:01.615 [2024-07-24 18:07:47.730247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190edd58 00:25:01.615 [2024-07-24 18:07:47.731388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.731444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.742497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fa3a0 00:25:01.615 [2024-07-24 18:07:47.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.743638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.755973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e3d08 00:25:01.615 [2024-07-24 18:07:47.757290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.757320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.769398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e49b0 00:25:01.615 [2024-07-24 18:07:47.770849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.770881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.782835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e7c50 00:25:01.615 [2024-07-24 18:07:47.784463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.784495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.796244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30)
with pdu=0x2000190fa3a0 00:25:01.615 [2024-07-24 18:07:47.798026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.798058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.808172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f0bc0 00:25:01.615 [2024-07-24 18:07:47.809414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.809457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.821029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e8088 00:25:01.615 [2024-07-24 18:07:47.822119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.822167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.832985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8a50 00:25:01.615 [2024-07-24 18:07:47.834892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.834923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.843849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ed0b0 00:25:01.615 [2024-07-24 18:07:47.844778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.844808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.858051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f35f0 00:25:01.615 [2024-07-24 18:07:47.859181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.859209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.871084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fc128 00:25:01.615 [2024-07-24 18:07:47.872359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.615 [2024-07-24 18:07:47.872387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.615 [2024-07-24 18:07:47.883096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x11e7f30) with pdu=0x2000190ebb98 00:25:01.873 [2024-07-24 18:07:47.884361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.873 [2024-07-24 18:07:47.884407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:01.873 [2024-07-24 18:07:47.896537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e4578 00:25:01.873 [2024-07-24 18:07:47.897976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.873 [2024-07-24 18:07:47.898010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:01.873 [2024-07-24 18:07:47.909893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190df988 00:25:01.873 [2024-07-24 18:07:47.911561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.873 [2024-07-24 18:07:47.911594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:01.873 [2024-07-24 18:07:47.923220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8e88 00:25:01.873 [2024-07-24 18:07:47.925017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.873 [2024-07-24 18:07:47.925048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:01.873 [2024-07-24 18:07:47.936576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f0ff8 00:25:01.873 [2024-07-24 18:07:47.938542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:47.938573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:47.949900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190eea00 00:25:01.874 [2024-07-24 18:07:47.952028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:47.952059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:47.958904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e7818 00:25:01.874 [2024-07-24 18:07:47.959847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:47.959877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:47.971769] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f46d0 00:25:01.874 [2024-07-24 18:07:47.972696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:47.972728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:47.984783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e0a68 00:25:01.874 [2024-07-24 18:07:47.985521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:47.985552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:47.998098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f7da8 00:25:01.874 [2024-07-24 18:07:47.998998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:47.999029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.011312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f4298 00:25:01.874 [2024-07-24 18:07:48.012360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.012388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.023278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ed0b0 00:25:01.874 [2024-07-24 18:07:48.025158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.025191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.034217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f6890 00:25:01.874 [2024-07-24 18:07:48.035122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.035166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.047477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ebb98 00:25:01.874 [2024-07-24 18:07:48.048595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.048626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.060763] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f7da8 00:25:01.874 [2024-07-24 18:07:48.062022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.062052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.074061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f81e0 00:25:01.874 [2024-07-24 18:07:48.075544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.075575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.087361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f57b0 00:25:01.874 [2024-07-24 18:07:48.088949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.088979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.100640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ebb98 00:25:01.874 [2024-07-24 18:07:48.102409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.102440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.113917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e9e10 00:25:01.874 [2024-07-24 18:07:48.115868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.115898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.127235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8618 00:25:01.874 [2024-07-24 18:07:48.129335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.129363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:01.874 [2024-07-24 18:07:48.136219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e84c0 00:25:01.874 [2024-07-24 18:07:48.137134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.874 [2024-07-24 18:07:48.137177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.132 
[2024-07-24 18:07:48.148425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e1b48 00:25:02.132 [2024-07-24 18:07:48.149409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.149444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.161727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f4b08 00:25:02.132 [2024-07-24 18:07:48.162770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.162802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.175042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e23b8 00:25:02.132 [2024-07-24 18:07:48.176298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.176327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.188313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e1f80 00:25:02.132 [2024-07-24 18:07:48.189739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.189770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.201604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f3e60 00:25:02.132 [2024-07-24 18:07:48.203175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.203203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.214810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f4b08 00:25:02.132 [2024-07-24 18:07:48.216568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.216599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.228057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e99d8 00:25:02.132 [2024-07-24 18:07:48.229966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.229997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:25:02.132 [2024-07-24 18:07:48.239925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190df550 00:25:02.132 [2024-07-24 18:07:48.241352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.241396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.251485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f1ca0 00:25:02.132 [2024-07-24 18:07:48.253365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.253393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.262346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e6b70 00:25:02.132 [2024-07-24 18:07:48.263305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.132 [2024-07-24 18:07:48.263332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.132 [2024-07-24 18:07:48.276485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fbcf0 00:25:02.132 [2024-07-24 18:07:48.277553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.277585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.289625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fc998 00:25:02.133 [2024-07-24 18:07:48.290862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.290893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.301636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f7da8 00:25:02.133 [2024-07-24 18:07:48.302862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.302893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.315216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f81e0 00:25:02.133 [2024-07-24 18:07:48.316616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.316647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 
cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.328553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f6458 00:25:02.133 [2024-07-24 18:07:48.330122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.330168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.340316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190dece0 00:25:02.133 [2024-07-24 18:07:48.341460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.341491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.352819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e4140 00:25:02.133 [2024-07-24 18:07:48.353910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.353946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.367094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fd208 00:25:02.133 [2024-07-24 18:07:48.368900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.368931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.380435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e9e10 00:25:02.133 [2024-07-24 18:07:48.382444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.382486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:02.133 [2024-07-24 18:07:48.392282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190eff18 00:25:02.133 [2024-07-24 18:07:48.393666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.133 [2024-07-24 18:07:48.393697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:02.391 [2024-07-24 18:07:48.403836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190de038 00:25:02.391 [2024-07-24 18:07:48.405912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.391 [2024-07-24 18:07:48.405946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.391 [2024-07-24 18:07:48.414681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f0788 00:25:02.391 [2024-07-24 18:07:48.415563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.391 [2024-07-24 18:07:48.415594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:02.391 [2024-07-24 18:07:48.428758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ea248 00:25:02.391 [2024-07-24 18:07:48.429846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.391 [2024-07-24 18:07:48.429877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.391 [2024-07-24 18:07:48.441894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f7100 00:25:02.391 [2024-07-24 18:07:48.443180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.391 [2024-07-24 18:07:48.443207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.391 [2024-07-24 18:07:48.454775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f5be8 00:25:02.391 [2024-07-24 18:07:48.456034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.391 [2024-07-24 18:07:48.456065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.467772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e73e0 00:25:02.392 [2024-07-24 18:07:48.469228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.469256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.480607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190eea00 00:25:02.392 [2024-07-24 18:07:48.482017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.482047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.493306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e84c0 00:25:02.392 [2024-07-24 18:07:48.494723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.494754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.505814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e49b0 00:25:02.392 [2024-07-24 18:07:48.507304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.507332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.518510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ebb98 00:25:02.392 [2024-07-24 18:07:48.519933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.519964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.529933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fe720 00:25:02.392 [2024-07-24 18:07:48.531228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.531255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.541950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fbcf0 00:25:02.392 [2024-07-24 18:07:48.542872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.542902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.555027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e6738 00:25:02.392 [2024-07-24 18:07:48.556123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.556167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.568350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e73e0 00:25:02.392 [2024-07-24 18:07:48.569583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.569614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.580294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e9168 00:25:02.392 [2024-07-24 18:07:48.581563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.581594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.593643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e95a0 00:25:02.392 [2024-07-24 18:07:48.595037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.595068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.605501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f31b8 00:25:02.392 [2024-07-24 18:07:48.606451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.606482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.617977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fc998 00:25:02.392 [2024-07-24 18:07:48.618874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.618904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.630655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e27f0 00:25:02.392 [2024-07-24 18:07:48.631549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.631579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.643310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190dece0 00:25:02.392 [2024-07-24 18:07:48.644222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.644249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.392 [2024-07-24 18:07:48.656047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190dfdc0 00:25:02.392 [2024-07-24 18:07:48.656967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.392 [2024-07-24 18:07:48.657000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.650 [2024-07-24 18:07:48.668856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ee190 00:25:02.650 [2024-07-24 18:07:48.669741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.650 [2024-07-24 
18:07:48.669774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.650 [2024-07-24 18:07:48.681383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ebfd0 00:25:02.650 [2024-07-24 18:07:48.682310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.682344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.694142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e9e10 00:25:02.651 [2024-07-24 18:07:48.695009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.695041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.706805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8e88 00:25:02.651 [2024-07-24 18:07:48.707703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.707734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.719393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f6cc8 00:25:02.651 [2024-07-24 18:07:48.720371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.720414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.732057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f6020 00:25:02.651 [2024-07-24 18:07:48.732923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.732954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.744756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fda78 00:25:02.651 [2024-07-24 18:07:48.745648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.745680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.757304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e3060 00:25:02.651 [2024-07-24 18:07:48.758228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:02.651 [2024-07-24 18:07:48.758257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.770036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f1430 00:25:02.651 [2024-07-24 18:07:48.770938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.770969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.784240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ef6a8 00:25:02.651 [2024-07-24 18:07:48.785825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.785856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.796180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e84c0 00:25:02.651 [2024-07-24 18:07:48.797307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.797335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.808752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ec840 00:25:02.651 [2024-07-24 18:07:48.809834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.809865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.821358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fa7d8 00:25:02.651 [2024-07-24 18:07:48.822491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.822522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.834002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e73e0 00:25:02.651 [2024-07-24 18:07:48.835077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.835115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.846693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190eea00 00:25:02.651 [2024-07-24 18:07:48.847724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20687 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.847753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.859236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e5658 00:25:02.651 [2024-07-24 18:07:48.860332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.860360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.871880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190eaab8 00:25:02.651 [2024-07-24 18:07:48.872937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.872967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.884573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f0bc0 00:25:02.651 [2024-07-24 18:07:48.885598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.885629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.898819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f2510 00:25:02.651 [2024-07-24 18:07:48.900541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.900571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.651 [2024-07-24 18:07:48.912159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f4298 00:25:02.651 [2024-07-24 18:07:48.914055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.651 [2024-07-24 18:07:48.914087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:48.923928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e0a68 00:25:02.910 [2024-07-24 18:07:48.925400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:48.925449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:48.935491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ee5c8 00:25:02.910 [2024-07-24 18:07:48.937364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1798 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:48.937413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:48.946325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e6b70 00:25:02.910 [2024-07-24 18:07:48.947288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:48.947316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:48.959511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f2948 00:25:02.910 [2024-07-24 18:07:48.960555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:48.960586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:48.972843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ebb98 00:25:02.910 [2024-07-24 18:07:48.974075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:48.974114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:48.987005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f9b30 00:25:02.910 [2024-07-24 18:07:48.988446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:48.988478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.000072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190df118 00:25:02.910 [2024-07-24 18:07:49.001660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.001691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.010819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f46d0 00:25:02.910 [2024-07-24 18:07:49.011506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.011542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.024047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e4578 00:25:02.910 [2024-07-24 18:07:49.024943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:3757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.024974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.038646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e5a90 00:25:02.910 [2024-07-24 18:07:49.040543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.040573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.051948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f6cc8 00:25:02.910 [2024-07-24 18:07:49.054048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.054078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.060983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fa3a0 00:25:02.910 [2024-07-24 18:07:49.061878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.061908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.072974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fac10 00:25:02.910 [2024-07-24 18:07:49.073849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.073879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.086320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f7da8 00:25:02.910 [2024-07-24 18:07:49.087530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.087560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.099750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ee190 00:25:02.910 [2024-07-24 18:07:49.100999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.101029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.113948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8a50 00:25:02.910 [2024-07-24 18:07:49.115449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.115492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.127034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e5220 00:25:02.910 [2024-07-24 18:07:49.128638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.128675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.139043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f4b08 00:25:02.910 [2024-07-24 18:07:49.140607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.140637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.152413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e1f80 00:25:02.910 [2024-07-24 18:07:49.154171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.154200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.165319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e1710 00:25:02.910 [2024-07-24 18:07:49.167300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.910 [2024-07-24 18:07:49.167328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:02.910 [2024-07-24 18:07:49.177273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e9168 00:25:02.910 [2024-07-24 18:07:49.178677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.911 [2024-07-24 18:07:49.178710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.169 [2024-07-24 18:07:49.188836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f7538 00:25:03.169 [2024-07-24 18:07:49.190714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.169 [2024-07-24 18:07:49.190748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.169 [2024-07-24 18:07:49.199770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fcdd0 00:25:03.169 [2024-07-24 18:07:49.200667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.169 [2024-07-24 18:07:49.200698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.169 [2024-07-24 18:07:49.213133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f3a28 00:25:03.169 [2024-07-24 18:07:49.214231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.169 [2024-07-24 18:07:49.214259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:03.169 [2024-07-24 18:07:49.226435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f96f8 00:25:03.169 [2024-07-24 18:07:49.227665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.169 [2024-07-24 18:07:49.227696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:03.169 [2024-07-24 18:07:49.239763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f9b30 00:25:03.169 [2024-07-24 18:07:49.241202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.169 [2024-07-24 18:07:49.241231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.169 [2024-07-24 18:07:49.253058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e6738 00:25:03.170 [2024-07-24 18:07:49.254644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.254675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.266386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f3a28 00:25:03.170 [2024-07-24 18:07:49.268139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.268185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.279660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e5a90 00:25:03.170 [2024-07-24 18:07:49.281591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.281623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.292956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ec408 00:25:03.170 [2024-07-24 
18:07:49.295071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.295108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.301994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f4f40 00:25:03.170 [2024-07-24 18:07:49.302884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.302915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.315261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8a50 00:25:03.170 [2024-07-24 18:07:49.316357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.316402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.328183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8618 00:25:03.170 [2024-07-24 18:07:49.329553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.329583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.341570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8a50 00:25:03.170 [2024-07-24 18:07:49.342960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.342991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.354851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f1ca0 00:25:03.170 [2024-07-24 18:07:49.356454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.356485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.368238] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fcdd0 00:25:03.170 [2024-07-24 18:07:49.369978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.370008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.380088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e12d8 
00:25:03.170 [2024-07-24 18:07:49.381325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.381353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.392582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190eea00 00:25:03.170 [2024-07-24 18:07:49.393796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.393826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.405695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e8d30 00:25:03.170 [2024-07-24 18:07:49.407110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.407164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.417701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fc998 00:25:03.170 [2024-07-24 18:07:49.419107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.419153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.170 [2024-07-24 18:07:49.430909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e5658 00:25:03.170 [2024-07-24 18:07:49.432492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.170 [2024-07-24 18:07:49.432523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.442799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fef90 00:25:03.429 [2024-07-24 18:07:49.443856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.443890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.455725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f92c0 00:25:03.429 [2024-07-24 18:07:49.456635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.456675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.469151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11e7f30) with pdu=0x2000190f0bc0 00:25:03.429 [2024-07-24 18:07:49.470222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.470251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.481180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f8a50 00:25:03.429 [2024-07-24 18:07:49.483094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.483153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.492087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fe720 00:25:03.429 [2024-07-24 18:07:49.492958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.492989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.506255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f3e60 00:25:03.429 [2024-07-24 18:07:49.507363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.507391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.519382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e0ea0 00:25:03.429 [2024-07-24 18:07:49.520644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.520676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.531445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fc998 00:25:03.429 [2024-07-24 18:07:49.532662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.532694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.544817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fd640 00:25:03.429 [2024-07-24 18:07:49.546237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.546265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.556643] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f4f40 00:25:03.429 [2024-07-24 18:07:49.557549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.557580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.569155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ecc78 00:25:03.429 [2024-07-24 18:07:49.570026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.570056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.581838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f5be8 00:25:03.429 [2024-07-24 18:07:49.582738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.582768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.594481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fe2e8 00:25:03.429 [2024-07-24 18:07:49.595428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.595459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.607503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e84c0 00:25:03.429 [2024-07-24 18:07:49.608211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.608238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.620457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f96f8 00:25:03.429 [2024-07-24 18:07:49.621497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.621528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.633088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fc128 00:25:03.429 [2024-07-24 18:07:49.634164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.634191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.645793] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190df988 00:25:03.429 [2024-07-24 18:07:49.646853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.646883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.658473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e0ea0 00:25:03.429 [2024-07-24 18:07:49.659521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.659551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.671036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190ee5c8 00:25:03.429 [2024-07-24 18:07:49.672114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.672158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.683739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190f35f0 00:25:03.429 [2024-07-24 18:07:49.684801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.684831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.429 [2024-07-24 18:07:49.696371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e7818 00:25:03.429 [2024-07-24 18:07:49.697490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.429 [2024-07-24 18:07:49.697523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.687 [2024-07-24 18:07:49.709478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190fbcf0 00:25:03.688 [2024-07-24 18:07:49.710363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.688 [2024-07-24 18:07:49.710393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.688 [2024-07-24 18:07:49.722692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7f30) with pdu=0x2000190e0a68 00:25:03.688 [2024-07-24 18:07:49.723891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.688 [2024-07-24 18:07:49.723923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.688 
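Every completion in the run above decodes the same way: the status tuple (00/22) is status code type 0x0 (generic command status) with status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR, which is what the host reports when a write's CRC-32C data digest does not match its TCP PDU payload. As a rough cross-check of the counter read below, the completions can also be tallied straight from a saved copy of this console output; a minimal sketch, where console.log is a hypothetical file name and the harness itself reads the counter via bdev_get_iostat instead:

  # Illustrative only: count transient transport error completions in a saved log.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log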
00:25:03.688 Latency(us)
00:25:03.688 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:25:03.688 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:03.688 nvme0n1                     :       2.00   20119.25      78.59       0.00      0.00    6350.59    2609.30   15728.64
00:25:03.688 ===================================================================================================================
00:25:03.688 Total                       :              20119.25      78.59       0.00      0.00    6350.59    2609.30   15728.64
00:25:03.688 0
00:25:03.688 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:03.688 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:03.688 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:03.688 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:03.688 | .driver_specific
00:25:03.688 | .nvme_error
00:25:03.688 | .status_code
00:25:03.688 | .command_transient_transport_error'
00:25:03.946 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:25:03.946 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2885120
00:25:03.946 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2885120 ']'
00:25:03.946 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2885120
00:25:03.946 18:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885120
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885120'
00:25:03.946 killing process with pid 2885120
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2885120
00:25:03.946 Received shutdown signal, test time was about 2.000000 seconds
00:25:03.946
00:25:03.946 Latency(us)
00:25:03.946 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:25:03.946 ===================================================================================================================
00:25:03.946 Total                       :       0.00       0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2885120
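get_transient_errcount works exactly as the trace above shows: because bdevperf was configured with bdev_nvme_set_options --nvme-error-stat, the bdev layer keeps a per-status-code NVMe error counter, and the pass succeeds when command_transient_transport_error is non-zero (158 after this two-second run). A minimal standalone equivalent, assuming the rpc.py path and /var/tmp/bperf.sock socket seen in the trace (errcount is an illustrative variable name):

  # Read the transient transport error counter the same way digest.sh does.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "transient transport errors: $errcount"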
00:25:03.946 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2885632
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2885632 /var/tmp/bperf.sock
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2885632 ']'
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:04.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:04.204 18:07:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:04.204 [2024-07-24 18:07:50.357787] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:25:04.204 [2024-07-24 18:07:50.357870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885632 ]
00:25:04.204 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:04.204 Zero copy mechanism will not be used.
00:25:04.204 EAL: No free 2048 kB hugepages reported on node 1
00:25:04.204 [2024-07-24 18:07:50.420682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:04.462 [2024-07-24 18:07:50.535316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:05.395 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:05.652 nvme0n1
00:25:05.652 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:05.652 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:05.652 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:05.910 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:05.910 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:05.910 18:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:05.910 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:05.910 Zero copy mechanism will not be used.
00:25:05.910 Running I/O for 2 seconds...
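Condensed, the setup just traced for this second pass is: keep per-controller NVMe error counters with unlimited bdev retries, attach the target over TCP with data digest enabled (--ddgst), then re-arm the accel error module so crc32c results are corrupted at an interval of 32 operations, after which the 128 KiB random writes below start failing their digest checks. A sketch of the same RPC sequence, with paths and addresses as in the trace; note the harness issues the inject call through rpc_cmd against the target application's default RPC socket rather than bperf.sock:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt crc32c output every 32 ops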
00:25:05.910 [2024-07-24 18:07:52.036189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90
00:25:05.910 [2024-07-24 18:07:52.036611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.910 [2024-07-24 18:07:52.036652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.910 [2024-07-24 18:07:52.049539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90
00:25:05.910 [2024-07-24 18:07:52.049969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.910 [2024-07-24 18:07:52.050003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... ~136 further triplets of the same form elided (tcp.c:2113:data_crc32_calc_done *ERROR* / nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE *NOTICE* / nvme_qpair.c: 474:spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) *NOTICE*), lba varying and sqhd cycling 0001/0021/0041/0061, [2024-07-24 18:07:52.062091] through [2024-07-24 18:07:53.593913], Jenkins time 00:25:05.910 through 00:25:07.467 ...]
00:25:07.467 [2024-07-24 18:07:53.604548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.467 [2024-07-24 18:07:53.604853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-24 18:07:53.604881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.467 [2024-07-24 18:07:53.616145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.467 [2024-07-24 18:07:53.616480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-24 18:07:53.616525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.467 [2024-07-24 18:07:53.627236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.467 [2024-07-24 18:07:53.627581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-24 18:07:53.627609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.467 [2024-07-24 18:07:53.638243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.467 [2024-07-24 18:07:53.638542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-24 18:07:53.638570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.467 [2024-07-24 18:07:53.649667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.467 [2024-07-24 18:07:53.650137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-24 18:07:53.650169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.467 [2024-07-24 18:07:53.661985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.467 [2024-07-24 18:07:53.662389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.467 [2024-07-24 18:07:53.662418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.467 [2024-07-24 18:07:53.673393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.468 [2024-07-24 18:07:53.673791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.468 [2024-07-24 18:07:53.673819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.468 [2024-07-24 18:07:53.684820] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.468 [2024-07-24 18:07:53.685135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.468 [2024-07-24 18:07:53.685163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.468 [2024-07-24 18:07:53.695861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.468 [2024-07-24 18:07:53.696207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.468 [2024-07-24 18:07:53.696236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.468 [2024-07-24 18:07:53.707255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.468 [2024-07-24 18:07:53.707651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.468 [2024-07-24 18:07:53.707679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.468 [2024-07-24 18:07:53.718264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.468 [2024-07-24 18:07:53.718607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.468 [2024-07-24 18:07:53.718635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.468 [2024-07-24 18:07:53.729758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.468 [2024-07-24 18:07:53.730173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.468 [2024-07-24 18:07:53.730201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.741006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.741385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.741416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.751842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.752313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.752343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:07.726 [2024-07-24 18:07:53.762882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.763248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.763277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.774256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.774615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.774658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.786029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.786358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.786387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.796118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.796567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.796610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.807256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.807612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.807640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.818059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.818432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.818460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.726 [2024-07-24 18:07:53.829371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.726 [2024-07-24 18:07:53.829775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.726 [2024-07-24 18:07:53.829818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.840445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.840781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.840814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.852804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.853245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.853273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.863472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.863719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.863762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.873847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.874228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.874256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.884730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.885203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.885232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.895982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.896397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.896427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.907690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.908079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.908116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.918719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.919082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.919116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.929718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.930016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.930044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.940828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.941256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.941284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.951393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.951862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.951891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.962609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.963049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.963078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.973865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.974255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.974284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.983855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.984304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.984333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.727 [2024-07-24 18:07:53.994778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.727 [2024-07-24 18:07:53.995135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.727 [2024-07-24 18:07:53.995172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.985 [2024-07-24 18:07:54.005169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.985 [2024-07-24 18:07:54.005513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.985 [2024-07-24 18:07:54.005544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.985 [2024-07-24 18:07:54.014865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.985 [2024-07-24 18:07:54.015259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.985 [2024-07-24 18:07:54.015288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.985 [2024-07-24 18:07:54.025042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e8270) with pdu=0x2000190fef90 00:25:07.985 [2024-07-24 18:07:54.025418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.985 [2024-07-24 18:07:54.025447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.985 00:25:07.985 Latency(us) 00:25:07.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.985 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:07.985 nvme0n1 : 2.01 2748.56 343.57 0.00 0.00 5807.81 2645.71 13689.74 00:25:07.985 =================================================================================================================== 00:25:07.985 Total : 2748.56 343.57 0.00 0.00 5807.81 2645.71 13689.74 00:25:07.985 0 00:25:07.985 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:07.985 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:07.985 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:07.985 | .driver_specific 00:25:07.985 | .nvme_error 00:25:07.985 | .status_code 00:25:07.985 | .command_transient_transport_error' 00:25:07.985 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 177 > 0 )) 00:25:08.244 18:07:54 
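The digest.sh trace above is the pass/fail check for this test: it pulls bdevperf's iostat over the bperf RPC socket and extracts the transient-transport-error counter with jq (177 errors were counted, so the (( 177 > 0 )) assertion passes). A minimal sketch of the same query, assuming a bperf instance is still listening on /var/tmp/bperf.sock:

    # Count transient transport errors the way digest.sh does above.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                   -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "nvme0n1 reported $errcount transient transport errors"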
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2885632
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2885632 ']'
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2885632
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885632
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885632'
00:25:08.244 killing process with pid 2885632
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2885632
00:25:08.244 Received shutdown signal, test time was about 2.000000 seconds
00:25:08.244
00:25:08.244 Latency(us)
00:25:08.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:08.244 ===================================================================================================================
00:25:08.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:08.244 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2885632
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2884008
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2884008 ']'
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2884008
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2884008
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2884008'
00:25:08.502 killing process with pid 2884008
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2884008
00:25:08.502 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2884008
00:25:08.762
00:25:08.762 real 0m17.695s
00:25:08.762 user 0m35.060s
00:25:08.762 sys 0m4.258s
00:25:08.762 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:08.762 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:08.762 ************************************
00:25:08.762 END TEST nvmf_digest_error
00:25:08.762 ************************************
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:08.763 18:07:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:08.763 rmmod nvme_tcp
00:25:08.763 rmmod nvme_fabrics
00:25:08.763 rmmod nvme_keyring
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2884008 ']'
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2884008
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2884008 ']'
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2884008
00:25:08.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2884008) - No such process
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2884008 is not found'
00:25:08.763 Process with pid 2884008 is not found
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:08.763 18:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:11.301
00:25:11.301 real 0m38.568s
00:25:11.301 user 1m9.072s
00:25:11.301 sys 0m9.914s
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:25:11.301 ************************************
00:25:11.301 END TEST nvmf_digest
00:25:11.301 ************************************
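The teardown traced above leans on the killprocess helper from autotest_common.sh: it probes the pid with kill -0 (which sends no signal), checks the process name so it never signals a sudo wrapper, then kills and reaps. A simplified sketch of that idiom -- not the exact helper, which carries extra platform handling:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then        # liveness probe only
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null       # SIGTERM, then reap the child
    }

The second invocation above (pid 2884008) shows the not-found branch: the nvmf target had already exited by the time nvmftestfini ran, so kill -0 fails and the helper just reports it.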
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:11.301 ************************************
00:25:11.301 START TEST nvmf_bdevperf
00:25:11.301 ************************************
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:25:11.301 * Looking for test storage...
00:25:11.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf --
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.301 18:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.205 18:07:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:13.205 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:13.205 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # 
(( 0 > 0 ))
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:25:13.205 Found net devices under 0000:09:00.0: cvl_0_0
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:13.205 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:25:13.206 Found net devices under 0000:09:00.1: cvl_0_1
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
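The discovery loop above walks each supported PCI function and collects the kernel netdevs bound to it from sysfs, which is how cvl_0_0 and cvl_0_1 were found under the two E810 ports. A condensed sketch of that mechanism (the per-device link-state read via operstate is an assumption here, inferred from the [[ up == up ]] checks in the trace):

    for pci in 0000:09:00.0 0000:09:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do   # netdevs sitting on this PCI function
            dev=${net##*/}
            [[ $(cat "$net/operstate") == up ]] &&       # operstate read is an assumption
                echo "Found net devices under $pci: $dev"
        done
    done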
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:13.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:13.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms
00:25:13.206
00:25:13.206 --- 10.0.0.2 ping statistics ---
00:25:13.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:13.206 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:13.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:13.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:25:13.206 00:25:13.206 --- 10.0.0.1 ping statistics --- 00:25:13.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.206 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2888111 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2888111 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2888111 ']' 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:13.206 18:07:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:13.206 [2024-07-24 18:07:59.272922] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
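Collected from the trace above, this is the complete namespace wiring for the phy test: the first E810 port (cvl_0_0) moves into a fresh netns and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. As a runnable sequence (the exact commands from the trace, in order):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator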
00:25:13.206 [2024-07-24 18:07:59.273000] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:13.206 EAL: No free 2048 kB hugepages reported on node 1
00:25:13.206 [2024-07-24 18:07:59.341061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:13.206 [2024-07-24 18:07:59.463490] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:13.206 [2024-07-24 18:07:59.463553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:13.206 [2024-07-24 18:07:59.463569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:13.206 [2024-07-24 18:07:59.463583] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:13.206 [2024-07-24 18:07:59.463594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:13.206 [2024-07-24 18:07:59.463682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:13.206 [2024-07-24 18:07:59.467122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:13.206 [2024-07-24 18:07:59.467137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:14.141 [2024-07-24 18:08:00.252663] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.141 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:14.141 Malloc0
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:14.142 [2024-07-24 18:08:00.321100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:25:14.142 {
00:25:14.142 "params": {
00:25:14.142 "name": "Nvme$subsystem",
00:25:14.142 "trtype": "$TEST_TRANSPORT",
00:25:14.142 "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:14.142 "adrfam": "ipv4",
00:25:14.142 "trsvcid": "$NVMF_PORT",
00:25:14.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:14.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:14.142 "hdgst": ${hdgst:-false},
00:25:14.142 "ddgst": ${ddgst:-false}
00:25:14.142 },
00:25:14.142 "method": "bdev_nvme_attach_controller"
00:25:14.142 }
00:25:14.142 EOF
00:25:14.142 )")
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:25:14.142 18:08:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:25:14.142 "params": {
00:25:14.142 "name": "Nvme1",
00:25:14.142 "trtype": "tcp",
00:25:14.142 "traddr": "10.0.0.2",
00:25:14.142 "adrfam": "ipv4",
00:25:14.142 "trsvcid": "4420",
00:25:14.142 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:14.142 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:14.142 "hdgst": false,
00:25:14.142 "ddgst": false
00:25:14.142 },
00:25:14.142 "method": "bdev_nvme_attach_controller"
00:25:14.142 }'
00:25:14.142 [2024-07-24 18:08:00.371506] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
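The rpc_cmd calls above are the harness' thin wrapper over SPDK's scripts/rpc.py; a hedged standalone equivalent of the tgt_init bring-up, using the same verbs and arguments as logged (rpc.py talks to /var/tmp/spdk.sock by default, the socket named in the waitforlisten step):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # transport opts exactly as in NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ramdisk bdev with 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because the RPC channel is a Unix-domain socket, it is reachable from the root namespace even though the target's TCP stack lives inside cvl_0_0_ns_spdk; only the data path at 10.0.0.2:4420 is namespaced.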
00:25:14.142 [2024-07-24 18:08:00.371577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888263 ]
00:25:14.142 EAL: No free 2048 kB hugepages reported on node 1
00:25:14.400 [2024-07-24 18:08:00.430742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:14.400 [2024-07-24 18:08:00.544692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:14.685 Running I/O for 1 seconds...
00:25:15.620
00:25:15.620                                                                    Latency(us)
00:25:15.620  Device Information : runtime(s)    IOPS     MiB/s   Fail/s    TO/s   Average      min      max
00:25:15.620  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:15.620  Verification LBA range: start 0x0 length 0x4000
00:25:15.620  Nvme1n1            :    1.01    8715.82    34.05     0.00    0.00  14622.80  2912.71  15049.01
00:25:15.620 ===================================================================================================================
00:25:15.620  Total              :            8715.82    34.05     0.00    0.00  14622.80  2912.71  15049.01
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2888411
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:25:15.880 {
00:25:15.880 "params": {
00:25:15.880 "name": "Nvme$subsystem",
00:25:15.880 "trtype": "$TEST_TRANSPORT",
00:25:15.880 "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:15.880 "adrfam": "ipv4",
00:25:15.880 "trsvcid": "$NVMF_PORT",
00:25:15.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:15.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:15.880 "hdgst": ${hdgst:-false},
00:25:15.880 "ddgst": ${ddgst:-false}
00:25:15.880 },
00:25:15.880 "method": "bdev_nvme_attach_controller"
00:25:15.880 }
00:25:15.880 EOF
00:25:15.880 )")
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:25:15.880 18:08:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:25:15.880 "params": {
00:25:15.880 "name": "Nvme1",
00:25:15.880 "trtype": "tcp",
00:25:15.880 "traddr": "10.0.0.2",
00:25:15.880 "adrfam": "ipv4",
00:25:15.880 "trsvcid": "4420",
00:25:15.880 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:15.880 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:15.880 "hdgst": false,
00:25:15.880 "ddgst": false
00:25:15.880 },
00:25:15.880 "method": "bdev_nvme_attach_controller"
00:25:15.880 }'
00:25:15.880 [2024-07-24 18:08:02.110904] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
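bdevperf has no NVMe-oF options of its own: it takes an SPDK JSON config through a file descriptor, and the /dev/fd/62 and /dev/fd/63 paths above are bash process substitutions wrapping gen_nvmf_target_json. A hedged reconstruction of the full document around the stanza printed above; the outer "subsystems"/"bdev" wrapper is SPDK's usual JSON-config shape and is an assumption here, since only the inner object appears verbatim in this log:

    # Persist the initiator config and hand it to bdevperf directly (file path is arbitrary).
    cat > /tmp/nvme1.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    ./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1

As a sanity check on the one-second run above, the two throughput columns agree: 8715.82 IOPS x 4096 B is about 35.7 MB/s, which is 34.05 MiB/s.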
00:25:15.880 [2024-07-24 18:08:02.110992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888411 ]
00:25:16.138 EAL: No free 2048 kB hugepages reported on node 1
00:25:16.138 [2024-07-24 18:08:02.172025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.396 [2024-07-24 18:08:02.285332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:16.396 Running I/O for 15 seconds...
00:25:18.924 18:08:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2888111
00:25:18.924 18:08:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:25:18.924 [2024-07-24 18:08:05.082243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-24 18:08:05.082287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-24 18:08:05.082319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 18:08:05.082338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 125 more nvme_io_qpair_print_command / spdk_nvme_print_completion pairs in the identical pattern trimmed for readability: interleaved READs covering lba 51608-52472 and WRITEs covering lba 52496-52616, all len:8, cids varying, timestamps 18:08:05.082357 through 18:08:05.086582, every one completed ABORTED - SQ DELETION (00/08); the full set is tallied just below ...]
[2024-07-24 18:08:05.086599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a9830 is same with the state(6) to be set
[2024-07-24 18:08:05.086616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-07-24 18:08:05.086635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-07-24 18:08:05.086648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52480 len:8 PRP1 0x0 PRP2 0x0
[2024-07-24 18:08:05.086663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
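The size of this abort dump is exactly accounted for by the benchmark's queue depth. Every command in it is len:8, i.e. 8 blocks of the 512 B Malloc0 bdev = 4 KiB, matching bdevperf's -o 4096, and the aborted LBAs tile two contiguous 8-block-stride ranges:

    reads:  (52480 - 51600) / 8 + 1 = 111 commands
    writes: (52616 - 52488) / 8 + 1 =  17 commands
    total:  111 + 17 = 128 commands  (= the -q 128 I/Os outstanding when the target was killed)

So the dump is simply the initiator completing every in-flight command with ABORTED - SQ DELETION once the TCP connection to the SIGKILLed target went away, including one final queued READ (lba 52480) that had to be completed manually.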
00:25:18.927 [2024-07-24 18:08:05.086729] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7a9830 was disconnected and freed. reset controller.
[2024-07-24 18:08:05.090580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-24 18:08:05.090653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor
[2024-07-24 18:08:05.091332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-24 18:08:05.091362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420
[2024-07-24 18:08:05.091378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set
[2024-07-24 18:08:05.091637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor
[2024-07-24 18:08:05.091881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-24 18:08:05.091904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-24 18:08:05.091921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-24 18:08:05.095526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-24 18:08:05.104804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-24 18:08:05.105282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-24 18:08:05.105325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420
[2024-07-24 18:08:05.105342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set
[2024-07-24 18:08:05.105585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor
[2024-07-24 18:08:05.105791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-24 18:08:05.105829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-24 18:08:05.105844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-24 18:08:05.109437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
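errno = 111 in the connect() failures above is ECONNREFUSED: the kernel inside the target namespace is still up and answers the SYN to 10.0.0.2:4420 with a reset, because the SIGKILLed nvmf_tgt no longer holds a listening socket there. On a Linux box with kernel headers installed the mapping can be confirmed with:

    grep -w 111 /usr/include/asm-generic/errno.h
    # #define ECONNREFUSED    111     /* Connection refused */

A refused connection is actually the quick-failure case here; had the namespace itself been torn down, each reconnect attempt would instead have waited out a timeout.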
00:25:19 [… the reset cycle above (resetting controller → connect() failed, errno = 111 → sock connection error → controller reinitialization failed → Resetting controller failed.) repeats roughly every 14 ms from 18:08:05.118707 through 18:08:05.779093, while the elapsed stamp advances from 00:25:18.928 to 00:25:19.708; ~48 iterations, identical except for timestamps, elided as duplicates …]
00:25:19.708 [2024-07-24 18:08:05.788456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.708 [2024-07-24 18:08:05.788893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.708 [2024-07-24 18:08:05.788924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.708 [2024-07-24 18:08:05.788941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.708 [2024-07-24 18:08:05.789193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.708 [2024-07-24 18:08:05.789437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.708 [2024-07-24 18:08:05.789460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.708 [2024-07-24 18:08:05.789475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.708 [2024-07-24 18:08:05.793073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.708 [2024-07-24 18:08:05.802423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.708 [2024-07-24 18:08:05.802908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.708 [2024-07-24 18:08:05.802938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.708 [2024-07-24 18:08:05.802955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.708 [2024-07-24 18:08:05.803206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.708 [2024-07-24 18:08:05.803450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.708 [2024-07-24 18:08:05.803481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.708 [2024-07-24 18:08:05.803497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.708 [2024-07-24 18:08:05.807097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.708 [2024-07-24 18:08:05.816482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.708 [2024-07-24 18:08:05.816907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.708 [2024-07-24 18:08:05.816947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.708 [2024-07-24 18:08:05.816964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.708 [2024-07-24 18:08:05.817214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.708 [2024-07-24 18:08:05.817459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.708 [2024-07-24 18:08:05.817481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.708 [2024-07-24 18:08:05.817497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.708 [2024-07-24 18:08:05.821100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.708 [2024-07-24 18:08:05.830465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.708 [2024-07-24 18:08:05.831025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.708 [2024-07-24 18:08:05.831097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.708 [2024-07-24 18:08:05.831126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.708 [2024-07-24 18:08:05.831366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.708 [2024-07-24 18:08:05.831609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.708 [2024-07-24 18:08:05.831632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.708 [2024-07-24 18:08:05.831646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.708 [2024-07-24 18:08:05.835260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.708 [2024-07-24 18:08:05.844420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.708 [2024-07-24 18:08:05.844830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.708 [2024-07-24 18:08:05.844861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.708 [2024-07-24 18:08:05.844878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.708 [2024-07-24 18:08:05.845129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.708 [2024-07-24 18:08:05.845374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.708 [2024-07-24 18:08:05.845397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.708 [2024-07-24 18:08:05.845412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.708 [2024-07-24 18:08:05.849012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.708 [2024-07-24 18:08:05.858381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.708 [2024-07-24 18:08:05.858819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.708 [2024-07-24 18:08:05.858850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.708 [2024-07-24 18:08:05.858867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.708 [2024-07-24 18:08:05.859119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.708 [2024-07-24 18:08:05.859363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.708 [2024-07-24 18:08:05.859386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.708 [2024-07-24 18:08:05.859402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.708 [2024-07-24 18:08:05.862994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.708 [2024-07-24 18:08:05.872347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.708 [2024-07-24 18:08:05.872774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.708 [2024-07-24 18:08:05.872805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.708 [2024-07-24 18:08:05.872822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.873061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.873313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.873337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.873351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.876947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.709 [2024-07-24 18:08:05.886316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.709 [2024-07-24 18:08:05.886725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.709 [2024-07-24 18:08:05.886756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.709 [2024-07-24 18:08:05.886773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.887013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.887265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.887289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.887304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.890900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.709 [2024-07-24 18:08:05.900248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.709 [2024-07-24 18:08:05.900684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.709 [2024-07-24 18:08:05.900714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.709 [2024-07-24 18:08:05.900731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.900977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.901234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.901258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.901273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.904868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.709 [2024-07-24 18:08:05.914213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.709 [2024-07-24 18:08:05.914697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.709 [2024-07-24 18:08:05.914727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.709 [2024-07-24 18:08:05.914744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.914984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.915237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.915261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.915275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.918873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.709 [2024-07-24 18:08:05.928219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.709 [2024-07-24 18:08:05.928713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.709 [2024-07-24 18:08:05.928743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.709 [2024-07-24 18:08:05.928761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.929000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.929252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.929276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.929291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.932892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.709 [2024-07-24 18:08:05.942244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.709 [2024-07-24 18:08:05.942803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.709 [2024-07-24 18:08:05.942856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.709 [2024-07-24 18:08:05.942873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.943121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.943365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.943387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.943408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.947007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.709 [2024-07-24 18:08:05.956159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.709 [2024-07-24 18:08:05.956595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.709 [2024-07-24 18:08:05.956625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.709 [2024-07-24 18:08:05.956642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.956882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.957137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.957167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.957182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.960775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.709 [2024-07-24 18:08:05.970129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.709 [2024-07-24 18:08:05.970565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.709 [2024-07-24 18:08:05.970596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.709 [2024-07-24 18:08:05.970613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.709 [2024-07-24 18:08:05.970852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.709 [2024-07-24 18:08:05.971095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.709 [2024-07-24 18:08:05.971128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.709 [2024-07-24 18:08:05.971143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.709 [2024-07-24 18:08:05.974739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.968 [2024-07-24 18:08:05.984092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:05.984502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:05.984533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:05.984550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:05.984788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:05.985031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:05.985054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:05.985069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:05.988673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.969 [2024-07-24 18:08:05.998012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:05.998441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:05.998476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:05.998494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:05.998733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:05.998976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:05.998999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:05.999013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.002618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.969 [2024-07-24 18:08:06.011962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.012406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.012437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.012454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.012693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.012936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.012959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.012974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.016581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.969 [2024-07-24 18:08:06.025935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.026375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.026405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.026423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.026661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.026905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.026928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.026943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.030546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.969 [2024-07-24 18:08:06.039881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.040313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.040343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.040360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.040599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.040849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.040872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.040887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.044489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.969 [2024-07-24 18:08:06.053830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.054259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.054289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.054306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.054545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.054789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.054811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.054827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.058434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.969 [2024-07-24 18:08:06.067788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.068223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.068255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.068273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.068512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.068755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.068778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.068793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.072395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.969 [2024-07-24 18:08:06.081731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.082139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.082170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.082187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.082427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.082670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.082692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.082707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.086332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.969 [2024-07-24 18:08:06.095670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.096093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.096167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.096185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.096425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.096668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.096691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.096706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.100311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.969 [2024-07-24 18:08:06.109751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.969 [2024-07-24 18:08:06.110178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.969 [2024-07-24 18:08:06.110210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.969 [2024-07-24 18:08:06.110227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.969 [2024-07-24 18:08:06.110467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.969 [2024-07-24 18:08:06.110711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.969 [2024-07-24 18:08:06.110733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.969 [2024-07-24 18:08:06.110749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.969 [2024-07-24 18:08:06.114358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.969 [2024-07-24 18:08:06.123710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.124167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.124199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.124216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.124456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.124700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.124722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.124737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.128334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.970 [2024-07-24 18:08:06.137670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.138076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.138114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.138139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.138380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.138624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.138646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.138661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.142266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.970 [2024-07-24 18:08:06.151604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.152031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.152061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.152078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.152326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.152571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.152593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.152608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.156212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.970 [2024-07-24 18:08:06.165558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.165998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.166028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.166045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.166295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.166540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.166563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.166579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.170183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.970 [2024-07-24 18:08:06.179509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.179937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.179967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.179985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.180236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.180480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.180508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.180524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.184127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.970 [2024-07-24 18:08:06.193484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.193968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.193999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.194016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.194267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.194511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.194534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.194549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.198148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.970 [2024-07-24 18:08:06.207494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.207932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.207962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.207979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.208230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.208475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.208498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.208513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.212120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.970 [2024-07-24 18:08:06.221470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.221962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.222011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.222029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.222280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.222524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.222547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.222562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.970 [2024-07-24 18:08:06.226167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.970 [2024-07-24 18:08:06.235509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.970 [2024-07-24 18:08:06.235993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.970 [2024-07-24 18:08:06.236023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:19.970 [2024-07-24 18:08:06.236040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:19.970 [2024-07-24 18:08:06.236291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:19.970 [2024-07-24 18:08:06.236535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.970 [2024-07-24 18:08:06.236558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.970 [2024-07-24 18:08:06.236574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.229 [2024-07-24 18:08:06.240232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.229 [2024-07-24 18:08:06.249391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.229 [2024-07-24 18:08:06.249825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-24 18:08:06.249873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.229 [2024-07-24 18:08:06.249891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.229 [2024-07-24 18:08:06.250146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.229 [2024-07-24 18:08:06.250391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.229 [2024-07-24 18:08:06.250414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.229 [2024-07-24 18:08:06.250429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.229 [2024-07-24 18:08:06.254033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.229 [2024-07-24 18:08:06.263392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.229 [2024-07-24 18:08:06.263895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-24 18:08:06.263926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.229 [2024-07-24 18:08:06.263943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.229 [2024-07-24 18:08:06.264193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.229 [2024-07-24 18:08:06.264449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.229 [2024-07-24 18:08:06.264472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.229 [2024-07-24 18:08:06.264486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.229 [2024-07-24 18:08:06.268079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.229 [2024-07-24 18:08:06.277449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.229 [2024-07-24 18:08:06.277950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-24 18:08:06.277981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.229 [2024-07-24 18:08:06.277998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.229 [2024-07-24 18:08:06.278254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.229 [2024-07-24 18:08:06.278498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.229 [2024-07-24 18:08:06.278520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.229 [2024-07-24 18:08:06.278535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.229 [2024-07-24 18:08:06.282135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.229 [2024-07-24 18:08:06.291486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.229 [2024-07-24 18:08:06.291912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-24 18:08:06.291942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.229 [2024-07-24 18:08:06.291959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.229 [2024-07-24 18:08:06.292210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.229 [2024-07-24 18:08:06.292454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.229 [2024-07-24 18:08:06.292477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.229 [2024-07-24 18:08:06.292492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.229 [2024-07-24 18:08:06.296087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.229 [2024-07-24 18:08:06.305441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.229 [2024-07-24 18:08:06.305867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-24 18:08:06.305897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.229 [2024-07-24 18:08:06.305915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.229 [2024-07-24 18:08:06.306167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.229 [2024-07-24 18:08:06.306411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.229 [2024-07-24 18:08:06.306434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.229 [2024-07-24 18:08:06.306449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.229 [2024-07-24 18:08:06.310043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.229 [2024-07-24 18:08:06.319396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.229 [2024-07-24 18:08:06.319825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.319856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.319873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.320128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.320373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.320395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.320416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.324009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.230 [2024-07-24 18:08:06.333355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.333769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.333799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.333816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.334056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.334309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.334333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.334347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.337939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.230 [2024-07-24 18:08:06.347284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.347691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.347722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.347740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.347979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.348234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.348258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.348273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.351867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.230 [2024-07-24 18:08:06.361209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.361634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.361664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.361681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.361920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.362175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.362198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.362213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.365802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.230 [2024-07-24 18:08:06.375149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.375586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.375622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.375640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.375880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.376136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.376160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.376174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.379767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.230 [2024-07-24 18:08:06.389127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.389565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.389595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.389612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.389851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.390095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.390128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.390143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.393735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.230 [2024-07-24 18:08:06.403077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.403567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.403598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.403615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.403854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.404097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.404130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.404145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.407738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.230 [2024-07-24 18:08:06.417116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.417559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.417591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.417608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.417848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.418097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.418131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.418147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.421747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.230 [2024-07-24 18:08:06.431088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.431526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.431557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.431574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.431813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.432057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.432079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.432094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.435697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.230 [2024-07-24 18:08:06.445035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.445448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-24 18:08:06.445479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.230 [2024-07-24 18:08:06.445496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.230 [2024-07-24 18:08:06.445736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.230 [2024-07-24 18:08:06.445979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.230 [2024-07-24 18:08:06.446002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.230 [2024-07-24 18:08:06.446017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.230 [2024-07-24 18:08:06.449619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.230 [2024-07-24 18:08:06.458965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.230 [2024-07-24 18:08:06.459389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-24 18:08:06.459421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.231 [2024-07-24 18:08:06.459438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.231 [2024-07-24 18:08:06.459677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.231 [2024-07-24 18:08:06.459921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.231 [2024-07-24 18:08:06.459944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.231 [2024-07-24 18:08:06.459959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.231 [2024-07-24 18:08:06.463573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.231 [2024-07-24 18:08:06.472912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.231 [2024-07-24 18:08:06.473366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-24 18:08:06.473396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.231 [2024-07-24 18:08:06.473414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.231 [2024-07-24 18:08:06.473653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.231 [2024-07-24 18:08:06.473896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.231 [2024-07-24 18:08:06.473918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.231 [2024-07-24 18:08:06.473934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.231 [2024-07-24 18:08:06.477530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.231 [2024-07-24 18:08:06.486878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.231 [2024-07-24 18:08:06.487292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-24 18:08:06.487323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.231 [2024-07-24 18:08:06.487340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.231 [2024-07-24 18:08:06.487579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.231 [2024-07-24 18:08:06.487822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.231 [2024-07-24 18:08:06.487844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.231 [2024-07-24 18:08:06.487859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.231 [2024-07-24 18:08:06.491466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.490 [2024-07-24 18:08:06.500794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.490 [2024-07-24 18:08:06.501197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.490 [2024-07-24 18:08:06.501228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.490 [2024-07-24 18:08:06.501246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.490 [2024-07-24 18:08:06.501485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.490 [2024-07-24 18:08:06.501729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.490 [2024-07-24 18:08:06.501751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.490 [2024-07-24 18:08:06.501766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.490 [2024-07-24 18:08:06.505374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.490 [2024-07-24 18:08:06.514710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.490 [2024-07-24 18:08:06.515120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.490 [2024-07-24 18:08:06.515151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.490 [2024-07-24 18:08:06.515176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.490 [2024-07-24 18:08:06.515416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.490 [2024-07-24 18:08:06.515659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.490 [2024-07-24 18:08:06.515682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.490 [2024-07-24 18:08:06.515697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.490 [2024-07-24 18:08:06.519301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.490 [2024-07-24 18:08:06.528636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.490 [2024-07-24 18:08:06.529065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.490 [2024-07-24 18:08:06.529095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.490 [2024-07-24 18:08:06.529123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.490 [2024-07-24 18:08:06.529364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.490 [2024-07-24 18:08:06.529607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.490 [2024-07-24 18:08:06.529630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.490 [2024-07-24 18:08:06.529645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.490 [2024-07-24 18:08:06.533246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.490 [2024-07-24 18:08:06.542588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.490 [2024-07-24 18:08:06.543014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.490 [2024-07-24 18:08:06.543044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.490 [2024-07-24 18:08:06.543061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.490 [2024-07-24 18:08:06.543310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.490 [2024-07-24 18:08:06.543554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.490 [2024-07-24 18:08:06.543576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.490 [2024-07-24 18:08:06.543592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.490 [2024-07-24 18:08:06.547190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.490 [2024-07-24 18:08:06.556520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.490 [2024-07-24 18:08:06.556932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.490 [2024-07-24 18:08:06.556962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.490 [2024-07-24 18:08:06.556979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.490 [2024-07-24 18:08:06.557230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.490 [2024-07-24 18:08:06.557474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.490 [2024-07-24 18:08:06.557503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.490 [2024-07-24 18:08:06.557519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.490 [2024-07-24 18:08:06.561119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.490 [2024-07-24 18:08:06.570455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.490 [2024-07-24 18:08:06.570856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.490 [2024-07-24 18:08:06.570886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.490 [2024-07-24 18:08:06.570903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.490 [2024-07-24 18:08:06.571155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.490 [2024-07-24 18:08:06.571399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.571421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.571435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.575026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.491 [2024-07-24 18:08:06.584376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.584797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.584827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.584844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.585083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.585336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.585360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.585375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.588984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.491 [2024-07-24 18:08:06.598331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.598735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.598765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.598782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.599021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.599276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.599299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.599314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.602906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.491 [2024-07-24 18:08:06.612263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.612665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.612696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.612713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.612953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.613207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.613231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.613246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.616840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.491 [2024-07-24 18:08:06.626199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.626603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.626633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.626650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.626890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.627144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.627168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.627183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.630776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.491 [2024-07-24 18:08:06.640131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.640559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.640589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.640606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.640845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.641089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.641124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.641140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.644736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.491 [2024-07-24 18:08:06.654084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.654575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.654606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.654623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.654868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.655120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.655144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.655159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.658758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.491 [2024-07-24 18:08:06.668116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.668547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.668588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.668605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.668845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.669088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.669123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.669139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.672778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.491 [2024-07-24 18:08:06.682151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.682581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.682613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.682630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.682871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.683126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.683161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.491 [2024-07-24 18:08:06.683176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.491 [2024-07-24 18:08:06.686783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.491 [2024-07-24 18:08:06.696118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.491 [2024-07-24 18:08:06.696523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.491 [2024-07-24 18:08:06.696554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.491 [2024-07-24 18:08:06.696571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.491 [2024-07-24 18:08:06.696811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.491 [2024-07-24 18:08:06.697055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.491 [2024-07-24 18:08:06.697077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.492 [2024-07-24 18:08:06.697099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.492 [2024-07-24 18:08:06.700706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.492 [2024-07-24 18:08:06.710043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.492 [2024-07-24 18:08:06.710453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.492 [2024-07-24 18:08:06.710484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.492 [2024-07-24 18:08:06.710501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.492 [2024-07-24 18:08:06.710741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.492 [2024-07-24 18:08:06.710985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.492 [2024-07-24 18:08:06.711007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.492 [2024-07-24 18:08:06.711022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.492 [2024-07-24 18:08:06.714624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.492 [2024-07-24 18:08:06.723953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.492 [2024-07-24 18:08:06.724366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.492 [2024-07-24 18:08:06.724397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.492 [2024-07-24 18:08:06.724414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.492 [2024-07-24 18:08:06.724653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.492 [2024-07-24 18:08:06.724896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.492 [2024-07-24 18:08:06.724919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.492 [2024-07-24 18:08:06.724934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.492 [2024-07-24 18:08:06.728533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.492 [2024-07-24 18:08:06.737890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.492 [2024-07-24 18:08:06.738302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.492 [2024-07-24 18:08:06.738334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.492 [2024-07-24 18:08:06.738351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.492 [2024-07-24 18:08:06.738590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.492 [2024-07-24 18:08:06.738833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.492 [2024-07-24 18:08:06.738855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.492 [2024-07-24 18:08:06.738871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.492 [2024-07-24 18:08:06.742472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.492 [2024-07-24 18:08:06.751821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.492 [2024-07-24 18:08:06.752230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.492 [2024-07-24 18:08:06.752265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.492 [2024-07-24 18:08:06.752284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.492 [2024-07-24 18:08:06.752523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.492 [2024-07-24 18:08:06.752766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.492 [2024-07-24 18:08:06.752789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.492 [2024-07-24 18:08:06.752803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.492 [2024-07-24 18:08:06.756418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.751 [2024-07-24 18:08:06.765758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.766183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.766215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.766232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.766472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.766716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.766738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.766753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.770353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.751 [2024-07-24 18:08:06.779691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.780127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.780158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.780175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.780415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.780658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.780681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.780696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.784298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.751 [2024-07-24 18:08:06.793665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.794074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.794111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.794131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.794371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.794620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.794643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.794657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.798259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.751 [2024-07-24 18:08:06.807589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.807996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.808026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.808043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.808294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.808539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.808561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.808576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.812177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.751 [2024-07-24 18:08:06.821514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.821945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.821975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.821992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.822242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.822487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.822509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.822524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.826124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.751 [2024-07-24 18:08:06.835462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.835948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.835978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.835995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.836246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.836490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.836513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.836528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.840131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.751 [2024-07-24 18:08:06.849469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.849877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.849908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.849925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.850175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.850419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.850441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.850456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.854048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.751 [2024-07-24 18:08:06.863386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.751 [2024-07-24 18:08:06.863799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.751 [2024-07-24 18:08:06.863829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.751 [2024-07-24 18:08:06.863846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.751 [2024-07-24 18:08:06.864086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.751 [2024-07-24 18:08:06.864356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.751 [2024-07-24 18:08:06.864379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.751 [2024-07-24 18:08:06.864394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.751 [2024-07-24 18:08:06.867986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.751 [2024-07-24 18:08:06.877323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.752 [2024-07-24 18:08:06.877805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.752 [2024-07-24 18:08:06.877835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.752 [2024-07-24 18:08:06.877852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.752 [2024-07-24 18:08:06.878091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.752 [2024-07-24 18:08:06.878344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.752 [2024-07-24 18:08:06.878367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.752 [2024-07-24 18:08:06.878382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.752 [2024-07-24 18:08:06.881973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.752 [2024-07-24 18:08:06.891321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.752 [2024-07-24 18:08:06.891756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.752 [2024-07-24 18:08:06.891787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.752 [2024-07-24 18:08:06.891810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.752 [2024-07-24 18:08:06.892050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.752 [2024-07-24 18:08:06.892303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.752 [2024-07-24 18:08:06.892327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.752 [2024-07-24 18:08:06.892342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.752 [2024-07-24 18:08:06.895931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:20.752 [2024-07-24 18:08:06.905268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.752 [2024-07-24 18:08:06.905698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.752 [2024-07-24 18:08:06.905728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.752 [2024-07-24 18:08:06.905745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.752 [2024-07-24 18:08:06.905984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.752 [2024-07-24 18:08:06.906239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.752 [2024-07-24 18:08:06.906263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.752 [2024-07-24 18:08:06.906278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.752 [2024-07-24 18:08:06.909870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.752 [2024-07-24 18:08:06.919208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.752 [2024-07-24 18:08:06.919610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.752 [2024-07-24 18:08:06.919640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:20.752 [2024-07-24 18:08:06.919657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:20.752 [2024-07-24 18:08:06.919896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:20.752 [2024-07-24 18:08:06.920155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:20.752 [2024-07-24 18:08:06.920179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:20.752 [2024-07-24 18:08:06.920194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.752 [2024-07-24 18:08:06.923785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 48 further identical reconnect/reset-failure cycles for tqpair=0x577ac0 (addr=10.0.0.2, port=4420), 2024-07-24 18:08:06.933143 through 18:08:07.593200 ...]
00:25:21.531 [2024-07-24 18:08:07.602534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.531 [2024-07-24 18:08:07.602944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.531 [2024-07-24 18:08:07.602974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.531 [2024-07-24 18:08:07.602991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.531 [2024-07-24 18:08:07.603241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.531 [2024-07-24 18:08:07.603485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.531 [2024-07-24 18:08:07.603508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.531 [2024-07-24 18:08:07.603523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.531 [2024-07-24 18:08:07.607118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.531 [2024-07-24 18:08:07.616452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.531 [2024-07-24 18:08:07.616895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.531 [2024-07-24 18:08:07.616926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.531 [2024-07-24 18:08:07.616943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.531 [2024-07-24 18:08:07.617194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.531 [2024-07-24 18:08:07.617438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.531 [2024-07-24 18:08:07.617461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.531 [2024-07-24 18:08:07.617476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.531 [2024-07-24 18:08:07.621071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.531 [2024-07-24 18:08:07.630410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.531 [2024-07-24 18:08:07.630793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.531 [2024-07-24 18:08:07.630824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.531 [2024-07-24 18:08:07.630841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.531 [2024-07-24 18:08:07.631080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.531 [2024-07-24 18:08:07.631332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.631356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.631371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.634964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.532 [2024-07-24 18:08:07.644304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.644736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.644766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.644789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.645029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.645282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.645306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.645321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.648909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.532 [2024-07-24 18:08:07.658248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.658679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.658709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.658726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.658965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.659219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.659243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.659258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.662848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.532 [2024-07-24 18:08:07.672188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.672602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.672632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.672650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.672889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.673146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.673170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.673185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.676775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.532 [2024-07-24 18:08:07.686118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.686501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.686532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.686549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.686789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.687032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.687061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.687076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.690691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.532 [2024-07-24 18:08:07.700099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.700547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.700577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.700594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.700834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.701077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.701100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.701124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.704719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.532 [2024-07-24 18:08:07.714059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.714494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.714525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.714542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.714782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.715025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.715048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.715063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.718667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.532 [2024-07-24 18:08:07.728009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.728413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.728445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.728462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.728702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.728945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.728968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.728983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.732584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.532 [2024-07-24 18:08:07.741931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.742441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.742472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.742489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.742728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.742973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.742995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.743010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.746615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.532 [2024-07-24 18:08:07.755963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.756407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.756437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.756454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.756693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.756937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.756959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.532 [2024-07-24 18:08:07.756974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.532 [2024-07-24 18:08:07.760602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.532 [2024-07-24 18:08:07.769952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.532 [2024-07-24 18:08:07.770369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.532 [2024-07-24 18:08:07.770400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.532 [2024-07-24 18:08:07.770417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.532 [2024-07-24 18:08:07.770657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.532 [2024-07-24 18:08:07.770900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.532 [2024-07-24 18:08:07.770923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.533 [2024-07-24 18:08:07.770938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.533 [2024-07-24 18:08:07.774535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.533 [2024-07-24 18:08:07.783864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.533 [2024-07-24 18:08:07.784306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.533 [2024-07-24 18:08:07.784336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.533 [2024-07-24 18:08:07.784354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.533 [2024-07-24 18:08:07.784600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.533 [2024-07-24 18:08:07.784843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.533 [2024-07-24 18:08:07.784866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.533 [2024-07-24 18:08:07.784881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.533 [2024-07-24 18:08:07.788478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.533 [2024-07-24 18:08:07.797825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.533 [2024-07-24 18:08:07.798245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.533 [2024-07-24 18:08:07.798276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.533 [2024-07-24 18:08:07.798294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.533 [2024-07-24 18:08:07.798534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.533 [2024-07-24 18:08:07.798777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.533 [2024-07-24 18:08:07.798799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.533 [2024-07-24 18:08:07.798813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.792 [2024-07-24 18:08:07.802415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.792 [2024-07-24 18:08:07.811748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.792 [2024-07-24 18:08:07.812189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.792 [2024-07-24 18:08:07.812220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.792 [2024-07-24 18:08:07.812237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.792 [2024-07-24 18:08:07.812476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.792 [2024-07-24 18:08:07.812718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.792 [2024-07-24 18:08:07.812741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.792 [2024-07-24 18:08:07.812757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.792 [2024-07-24 18:08:07.816361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.792 [2024-07-24 18:08:07.825714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.792 [2024-07-24 18:08:07.826150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.792 [2024-07-24 18:08:07.826181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.792 [2024-07-24 18:08:07.826199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.792 [2024-07-24 18:08:07.826438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.792 [2024-07-24 18:08:07.826681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.792 [2024-07-24 18:08:07.826703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.792 [2024-07-24 18:08:07.826725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.792 [2024-07-24 18:08:07.830332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.792 [2024-07-24 18:08:07.839677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.792 [2024-07-24 18:08:07.840081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.792 [2024-07-24 18:08:07.840120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.792 [2024-07-24 18:08:07.840139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.792 [2024-07-24 18:08:07.840379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.792 [2024-07-24 18:08:07.840623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.792 [2024-07-24 18:08:07.840646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.792 [2024-07-24 18:08:07.840661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.792 [2024-07-24 18:08:07.844268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.792 [2024-07-24 18:08:07.853613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.792 [2024-07-24 18:08:07.854032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.792 [2024-07-24 18:08:07.854062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.792 [2024-07-24 18:08:07.854079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.792 [2024-07-24 18:08:07.854328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.792 [2024-07-24 18:08:07.854572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.792 [2024-07-24 18:08:07.854595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.792 [2024-07-24 18:08:07.854610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.792 [2024-07-24 18:08:07.858213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.792 [2024-07-24 18:08:07.867555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.792 [2024-07-24 18:08:07.867975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.792 [2024-07-24 18:08:07.868005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.792 [2024-07-24 18:08:07.868022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.792 [2024-07-24 18:08:07.868272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.792 [2024-07-24 18:08:07.868516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.792 [2024-07-24 18:08:07.868539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.792 [2024-07-24 18:08:07.868554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.792 [2024-07-24 18:08:07.872152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.792 [2024-07-24 18:08:07.881490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.792 [2024-07-24 18:08:07.881880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.792 [2024-07-24 18:08:07.881910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.792 [2024-07-24 18:08:07.881927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.792 [2024-07-24 18:08:07.882180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.882424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.882446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.882462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.886055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.793 [2024-07-24 18:08:07.895453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.895862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.895893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.895911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.896161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.896405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.896428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.896443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.900030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.793 [2024-07-24 18:08:07.909382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.909791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.909822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.909839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.910079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.910332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.910355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.910370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.913958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.793 [2024-07-24 18:08:07.923315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.923731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.923762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.923779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.924019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.924280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.924304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.924319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.927916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.793 [2024-07-24 18:08:07.937260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.937682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.937713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.937730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.937970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.938226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.938249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.938265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.941859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.793 [2024-07-24 18:08:07.951208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.951646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.951676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.951694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.951932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.952188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.952212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.952227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.955820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.793 [2024-07-24 18:08:07.965183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.965610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.965641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.965658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.965898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.966154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.966177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.966192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.969795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.793 [2024-07-24 18:08:07.979144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.979586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.979616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.979633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.979872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.980126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.980149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.980164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.983753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.793 [2024-07-24 18:08:07.993117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:07.993546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:07.993576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:07.993593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:07.993832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:07.994075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:07.994098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:07.994126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:07.997720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.793 [2024-07-24 18:08:08.007058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:08.007475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:08.007505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.793 [2024-07-24 18:08:08.007522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.793 [2024-07-24 18:08:08.007762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.793 [2024-07-24 18:08:08.008004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.793 [2024-07-24 18:08:08.008027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.793 [2024-07-24 18:08:08.008042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.793 [2024-07-24 18:08:08.011647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.793 [2024-07-24 18:08:08.021012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.793 [2024-07-24 18:08:08.021429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.793 [2024-07-24 18:08:08.021460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.794 [2024-07-24 18:08:08.021484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.794 [2024-07-24 18:08:08.021723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.794 [2024-07-24 18:08:08.021966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.794 [2024-07-24 18:08:08.021989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.794 [2024-07-24 18:08:08.022004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.794 [2024-07-24 18:08:08.025604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.794 [2024-07-24 18:08:08.034940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.794 [2024-07-24 18:08:08.035392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.794 [2024-07-24 18:08:08.035423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.794 [2024-07-24 18:08:08.035440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.794 [2024-07-24 18:08:08.035679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.794 [2024-07-24 18:08:08.035923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.794 [2024-07-24 18:08:08.035945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.794 [2024-07-24 18:08:08.035961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.794 [2024-07-24 18:08:08.039561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.794 [2024-07-24 18:08:08.048902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.794 [2024-07-24 18:08:08.049320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.794 [2024-07-24 18:08:08.049352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:21.794 [2024-07-24 18:08:08.049369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:21.794 [2024-07-24 18:08:08.049608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:21.794 [2024-07-24 18:08:08.049851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.794 [2024-07-24 18:08:08.049874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.794 [2024-07-24 18:08:08.049889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.794 [2024-07-24 18:08:08.053492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.054 [2024-07-24 18:08:08.062821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.054 [2024-07-24 18:08:08.063256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.054 [2024-07-24 18:08:08.063286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.054 [2024-07-24 18:08:08.063304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.054 [2024-07-24 18:08:08.063543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.054 [2024-07-24 18:08:08.063786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.054 [2024-07-24 18:08:08.063815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.054 [2024-07-24 18:08:08.063831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.054 [2024-07-24 18:08:08.067438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2888111 Killed "${NVMF_APP[@]}" "$@" 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.054 [2024-07-24 18:08:08.076782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.054 [2024-07-24 18:08:08.077191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.054 [2024-07-24 18:08:08.077223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.054 [2024-07-24 18:08:08.077240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.054 [2024-07-24 18:08:08.077480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.054 [2024-07-24 18:08:08.077723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.054 [2024-07-24 18:08:08.077747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.054 [2024-07-24 18:08:08.077762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2889078 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2889078 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2889078 ']' 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.054 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.054 [2024-07-24 18:08:08.081366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
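Two things happen in the chunk above: bash reports that the previous nvmf_tgt (pid 2888111) was killed at bdevperf.sh line 35, and tgt_init relaunches the target as pid 2889078, after which waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. A hedged C analogue of that wait loop follows, under the assumption that probing the UNIX-domain socket with connect() is enough; the socket path and the max_retries=100 value come from the trace, while the 100 ms poll interval is invented here.

    /* Hypothetical stand-in for the script's waitforlisten helper: poll a
     * UNIX-domain socket until something accepts connections on it.
     * Path and retry budget mirror the trace (/var/tmp/spdk.sock,
     * max_retries=100); the sleep interval is an assumption. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);          /* server is up and accepting */
                return 0;
            }
            close(fd);
            usleep(100 * 1000);     /* not listening yet; retry shortly */
        }
        return -1;                  /* gave up after max_retries attempts */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            puts("RPC socket is up");
        else
            puts("timed out waiting for RPC socket");
        return 0;
    }

The actual shell helper is also handed the pid (2889078 above), presumably so it can fail fast if the process exits; the sketch keeps only the polling idea.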
00:25:22.054 [2024-07-24 18:08:08.090721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.054 [2024-07-24 18:08:08.091152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.054 [2024-07-24 18:08:08.091182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.054 [2024-07-24 18:08:08.091200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.054 [2024-07-24 18:08:08.091441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.054 [2024-07-24 18:08:08.091700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.054 [2024-07-24 18:08:08.091725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.054 [2024-07-24 18:08:08.091740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.095345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.055 [2024-07-24 18:08:08.104699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.105144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.105176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.105194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.105435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.105679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.105703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.105718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.109340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.055 [2024-07-24 18:08:08.118681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.119093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.119132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.119149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.119390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.119633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.119656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.119671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.122741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.055 [2024-07-24 18:08:08.129712] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:25:22.055 [2024-07-24 18:08:08.129783] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.055 [2024-07-24 18:08:08.132028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.132440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.132468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.132484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.132713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.132936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.132961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.132976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.136208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
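The initialization record above shows the relaunched target handing DPDK its EAL parameters: app name nvmf, core mask 0xE, a fixed --base-virtaddr, and --file-prefix=spdk0 (matching the -i 0 the script passed). The core mask is just a hex bitmap of CPUs; a tiny sketch that decodes it (the mask string mirrors the log):

    /* Decode the hex core mask from the EAL parameter line above.
     * 0xE = binary 1110, i.e. the target is pinned to cores 1, 2 and 3,
     * leaving core 0 free. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *mask_str = "0xE";        /* from "-c 0xE" in the log */
        unsigned long mask = strtoul(mask_str, NULL, 16);

        printf("core mask %s selects cores:", mask_str);
        for (int cpu = 0; mask != 0; cpu++, mask >>= 1) {
            if (mask & 1)
                printf(" %d", cpu);
        }
        printf("\n");                        /* prints: 1 2 3 */
        return 0;
    }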
00:25:22.055 [2024-07-24 18:08:08.145567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.146046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.146074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.146090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.146313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.146553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.146572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.146585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.149733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.055 [2024-07-24 18:08:08.158908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.159305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.159333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.159348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.159591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.159790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.159809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.159822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.162983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.055 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.055 [2024-07-24 18:08:08.172870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.173282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.173311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.173327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.173574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.173774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.173793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.173805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.177329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.055 [2024-07-24 18:08:08.186826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.187290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.187317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.187333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.187586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.187786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.187805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.187817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.191286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.055 [2024-07-24 18:08:08.200622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.200755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:22.055 [2024-07-24 18:08:08.201052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.201080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.201097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.201347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.201564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.201583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.201595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.205076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.055 [2024-07-24 18:08:08.214430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.214992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.215030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.055 [2024-07-24 18:08:08.215049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.055 [2024-07-24 18:08:08.215307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.055 [2024-07-24 18:08:08.215551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.055 [2024-07-24 18:08:08.215571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.055 [2024-07-24 18:08:08.215586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.055 [2024-07-24 18:08:08.219100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
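[editor's note] The "-c 0xE" EAL parameter logged earlier is why "Total cores available: 3" is reported here and why reactors later start on cores 1, 2 and 3 only: 0xE is binary 1110, i.e. cores 1-3 with core 0 masked out. A one-liner sketch to expand any such mask the way EAL reads it:

    # Expand a core mask as EAL interprets -c 0xE (bit i set -> core i used)
    mask=0xE; printf '%s -> cores:' "$mask"
    for i in {0..7}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo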
00:25:22.055 [2024-07-24 18:08:08.228284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.055 [2024-07-24 18:08:08.228703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.055 [2024-07-24 18:08:08.228744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.056 [2024-07-24 18:08:08.228771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.056 [2024-07-24 18:08:08.229027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.056 [2024-07-24 18:08:08.229234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.056 [2024-07-24 18:08:08.229253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.056 [2024-07-24 18:08:08.229266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.056 [2024-07-24 18:08:08.232784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.056 [2024-07-24 18:08:08.242077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.056 [2024-07-24 18:08:08.242531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.056 [2024-07-24 18:08:08.242559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.056 [2024-07-24 18:08:08.242575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.056 [2024-07-24 18:08:08.242818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.056 [2024-07-24 18:08:08.243033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.056 [2024-07-24 18:08:08.243052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.056 [2024-07-24 18:08:08.243065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.056 [2024-07-24 18:08:08.246623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.056 [2024-07-24 18:08:08.255947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.056 [2024-07-24 18:08:08.256381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.056 [2024-07-24 18:08:08.256408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.056 [2024-07-24 18:08:08.256425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.056 [2024-07-24 18:08:08.256666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.056 [2024-07-24 18:08:08.256880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.056 [2024-07-24 18:08:08.256899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.056 [2024-07-24 18:08:08.256912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.056 [2024-07-24 18:08:08.260420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.056 [2024-07-24 18:08:08.269737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.056 [2024-07-24 18:08:08.270306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.056 [2024-07-24 18:08:08.270359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.056 [2024-07-24 18:08:08.270378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.056 [2024-07-24 18:08:08.270637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.056 [2024-07-24 18:08:08.270840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.056 [2024-07-24 18:08:08.270870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.056 [2024-07-24 18:08:08.270885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.056 [2024-07-24 18:08:08.274374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.056 [2024-07-24 18:08:08.283690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.056 [2024-07-24 18:08:08.284207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.056 [2024-07-24 18:08:08.284239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.056 [2024-07-24 18:08:08.284257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.056 [2024-07-24 18:08:08.284517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.056 [2024-07-24 18:08:08.284733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.056 [2024-07-24 18:08:08.284752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.056 [2024-07-24 18:08:08.284764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.056 [2024-07-24 18:08:08.288239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.056 [2024-07-24 18:08:08.297612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.056 [2024-07-24 18:08:08.298056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.056 [2024-07-24 18:08:08.298087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.056 [2024-07-24 18:08:08.298115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.056 [2024-07-24 18:08:08.298374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.056 [2024-07-24 18:08:08.298608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.056 [2024-07-24 18:08:08.298628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.056 [2024-07-24 18:08:08.298641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.056 [2024-07-24 18:08:08.302156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.056 [2024-07-24 18:08:08.311443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.056 [2024-07-24 18:08:08.311891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.056 [2024-07-24 18:08:08.311918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.056 [2024-07-24 18:08:08.311933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.056 [2024-07-24 18:08:08.312177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.056 [2024-07-24 18:08:08.312377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.056 [2024-07-24 18:08:08.312396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.056 [2024-07-24 18:08:08.312409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.056 [2024-07-24 18:08:08.315901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.056 [2024-07-24 18:08:08.317992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.056 [2024-07-24 18:08:08.318029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.056 [2024-07-24 18:08:08.318046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.056 [2024-07-24 18:08:08.318060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.056 [2024-07-24 18:08:08.318085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.056 [2024-07-24 18:08:08.318313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.056 [2024-07-24 18:08:08.318338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.056 [2024-07-24 18:08:08.318341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.317 [2024-07-24 18:08:08.325027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.325495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.325528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.325546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.325782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.325999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.326020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.326035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:22.317 [2024-07-24 18:08:08.329483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.317 [2024-07-24 18:08:08.338500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.339055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.339092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.339121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.339351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.339583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.339604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.339620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 [2024-07-24 18:08:08.342803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.317 [2024-07-24 18:08:08.352006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.352571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.352610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.352629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.352869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.353087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.353125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.353142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 [2024-07-24 18:08:08.356305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
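[editor's note] The app_setup_trace notices just above spell out how to inspect the 0xFFFF tracepoint group mask this nvmf app was started with. A short sketch using exactly the two options the log itself suggests:

    # Snapshot the live trace ring of the app registered as "nvmf", shm id 0
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0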
00:25:22.317 [2024-07-24 18:08:08.365556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.366126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.366166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.366185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.366427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.366644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.366665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.366681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 [2024-07-24 18:08:08.369907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.317 [2024-07-24 18:08:08.379043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.379568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.379602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.379620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.379858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.380075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.380095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.380135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 [2024-07-24 18:08:08.383325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.317 [2024-07-24 18:08:08.392511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.393066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.393112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.393133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.393360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.393593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.393613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.393629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 [2024-07-24 18:08:08.396810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.317 [2024-07-24 18:08:08.406018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.406526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.406554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.406569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.406787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.407016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.407036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.407049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 [2024-07-24 18:08:08.410256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.317 [2024-07-24 18:08:08.419602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.419998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.420026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.420042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.420277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.420497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.420518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.420531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 [2024-07-24 18:08:08.423806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.317 [2024-07-24 18:08:08.433302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.433678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.433706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.317 [2024-07-24 18:08:08.433722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.317 [2024-07-24 18:08:08.433937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.317 [2024-07-24 18:08:08.434165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.317 [2024-07-24 18:08:08.434186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.317 [2024-07-24 18:08:08.434200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.317 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.317 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:22.317 [2024-07-24 18:08:08.437508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.317 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.317 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.317 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.317 [2024-07-24 18:08:08.446746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.317 [2024-07-24 18:08:08.447140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.317 [2024-07-24 18:08:08.447168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.318 [2024-07-24 18:08:08.447184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.318 [2024-07-24 18:08:08.447400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.318 [2024-07-24 18:08:08.447629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.318 [2024-07-24 18:08:08.447650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.318 [2024-07-24 18:08:08.447663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.318 [2024-07-24 18:08:08.450889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.318 [2024-07-24 18:08:08.460259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.318 [2024-07-24 18:08:08.460645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.318 [2024-07-24 18:08:08.460675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.318 [2024-07-24 18:08:08.460690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.318 [2024-07-24 18:08:08.460920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.318 [2024-07-24 18:08:08.461159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.318 [2024-07-24 18:08:08.461181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.318 [2024-07-24 18:08:08.461195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 [2024-07-24 18:08:08.464379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.318 [2024-07-24 18:08:08.466625] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.318 [2024-07-24 18:08:08.473865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.318 [2024-07-24 18:08:08.474268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.318 [2024-07-24 18:08:08.474296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.318 [2024-07-24 18:08:08.474311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.318 [2024-07-24 18:08:08.474540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.318 [2024-07-24 18:08:08.474753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.318 [2024-07-24 18:08:08.474772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.318 [2024-07-24 18:08:08.474786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.318 [2024-07-24 18:08:08.478023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 [2024-07-24 18:08:08.487583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.318 [2024-07-24 18:08:08.487986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.318 [2024-07-24 18:08:08.488015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.318 [2024-07-24 18:08:08.488032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.318 [2024-07-24 18:08:08.488261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.318 [2024-07-24 18:08:08.488495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.318 [2024-07-24 18:08:08.488516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.318 [2024-07-24 18:08:08.488530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.318 [2024-07-24 18:08:08.491840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:22.318 [2024-07-24 18:08:08.501225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.318 [2024-07-24 18:08:08.501779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.318 [2024-07-24 18:08:08.501817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.318 [2024-07-24 18:08:08.501836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.318 [2024-07-24 18:08:08.502075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.318 [2024-07-24 18:08:08.502326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.318 [2024-07-24 18:08:08.502348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.318 [2024-07-24 18:08:08.502364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.318 Malloc0 00:25:22.318 [2024-07-24 18:08:08.505593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 [2024-07-24 18:08:08.514931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.318 [2024-07-24 18:08:08.515334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.318 [2024-07-24 18:08:08.515363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577ac0 with addr=10.0.0.2, port=4420 00:25:22.318 [2024-07-24 18:08:08.515386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x577ac0 is same with the state(6) to be set 00:25:22.318 [2024-07-24 18:08:08.515618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577ac0 (9): Bad file descriptor 00:25:22.318 [2024-07-24 18:08:08.515831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.318 [2024-07-24 18:08:08.515851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.318 [2024-07-24 18:08:08.515864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
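[editor's note] The xtrace records interleaved above and below show host/bdevperf.sh (script lines 17-21) building the target one RPC at a time: transport, Malloc0 bdev, subsystem, namespace, then the TCP listener that finally lets the reconnect loop succeed. Consolidated as a sketch, assuming SPDK's stock scripts/rpc.py client in place of the suite's rpc_cmd wrapper:

    # Target-side setup as driven by host/bdevperf.sh, per the xtrace records
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420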
00:25:22.318 [2024-07-24 18:08:08.519134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.318 [2024-07-24 18:08:08.525270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.318 [2024-07-24 18:08:08.528593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.318 18:08:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2888411 00:25:22.577 [2024-07-24 18:08:08.693577] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:32.547 00:25:32.547 Latency(us) 00:25:32.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.547 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:32.547 Verification LBA range: start 0x0 length 0x4000 00:25:32.547 Nvme1n1 : 15.01 6647.50 25.97 8880.10 0.00 8218.17 1159.02 19126.80 00:25:32.547 =================================================================================================================== 00:25:32.547 Total : 6647.50 25.97 8880.10 0.00 8218.17 1159.02 19126.80 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:32.547 rmmod nvme_tcp 00:25:32.547 rmmod nvme_fabrics 00:25:32.547 rmmod nvme_keyring 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 2889078 ']' 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2889078 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2889078 ']' 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2889078 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2889078 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2889078' 00:25:32.547 killing process with pid 2889078 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2889078 00:25:32.547 18:08:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2889078 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.547 18:08:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.451 18:08:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:34.452 00:25:34.452 real 0m23.172s 00:25:34.452 user 0m59.058s 00:25:34.452 sys 0m5.729s 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:34.452 ************************************ 00:25:34.452 END TEST nvmf_bdevperf 00:25:34.452 ************************************ 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.452 ************************************ 00:25:34.452 START TEST nvmf_target_disconnect 00:25:34.452 ************************************ 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:34.452 * Looking for test storage... 00:25:34.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.452 18:08:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:34.452 18:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:36.351 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:36.352 18:08:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:36.352 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:36.352 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:36.352 18:08:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:36.352 Found net devices under 0000:09:00.0: cvl_0_0 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:36.352 Found net devices under 0000:09:00.1: cvl_0_1 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:36.352 18:08:22 
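At this point gather_supported_nvmf_pci_devs has matched both E810 functions (vendor 0x8086, device 0x159b, bound to the ice driver) and resolved each one to its renamed netdev, cvl_0_0 and cvl_0_1, through sysfs. A minimal sketch of the same discovery outside the test framework, assuming only the standard sysfs layout and the device IDs shown in the log:

    # List kernel net interfaces backed by Intel E810 NICs (0x8086:0x159b).
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor" 2>/dev/null)
        device=$(cat "$pci/device" 2>/dev/null)
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            # Each port bound to a kernel driver exposes its netdev under <pci>/net/.
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
            done
        fi
    done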
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:36.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:25:36.352 00:25:36.352 --- 10.0.0.2 ping statistics --- 00:25:36.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.352 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:25:36.352 00:25:36.352 --- 10.0.0.1 ping statistics --- 00:25:36.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.352 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.352 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:36.352 ************************************ 00:25:36.352 START TEST nvmf_target_disconnect_tc1 00:25:36.352 ************************************ 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:36.353 18:08:22 
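The nvmf_tcp_init steps above split the two E810 ports across network namespaces: cvl_0_0 (the target side, 10.0.0.2) moves into cvl_0_0_ns_spdk, cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, and the two pings confirm the path in both directions. Condensed from the common.sh commands visible in the log:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in through the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator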
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.353 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.353 [2024-07-24 18:08:22.601477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.353 [2024-07-24 18:08:22.601541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248b1a0 with addr=10.0.0.2, port=4420 00:25:36.353 [2024-07-24 18:08:22.601592] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:36.353 [2024-07-24 18:08:22.601615] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:36.353 [2024-07-24 18:08:22.601643] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:36.353 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:36.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:36.353 Initializing NVMe Controllers 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:36.353 00:25:36.353 real 0m0.094s 00:25:36.353 user 0m0.047s 00:25:36.353 sys 0m0.047s 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:36.353 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:36.353 ************************************ 00:25:36.353 END TEST nvmf_target_disconnect_tc1 00:25:36.353 ************************************ 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:36.611 18:08:22 
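tc1 passes because failing is the point: nothing is listening on 10.0.0.2:4420 yet, so the reconnect example's connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() cannot build the admin qpair, and the NOT wrapper turns the non-zero exit (es=1) into a test success. A simplified reduction of that wrapper, assuming plain bash (the real one in autotest_common.sh additionally validates the argument and treats signal exits, es > 128, specially):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # non-zero exit was the expected outcome
    }

    # No target is up yet, so this probe must fail for tc1 to pass.
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'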
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:36.611 ************************************ 00:25:36.611 START TEST nvmf_target_disconnect_tc2 00:25:36.611 ************************************ 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2892226 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2892226 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2892226 ']' 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.611 18:08:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.611 [2024-07-24 18:08:22.717803] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:25:36.611 [2024-07-24 18:08:22.717881] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.611 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.611 [2024-07-24 18:08:22.783426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.870 [2024-07-24 18:08:22.898739] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:36.870 [2024-07-24 18:08:22.898787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.870 [2024-07-24 18:08:22.898810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.870 [2024-07-24 18:08:22.898822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.870 [2024-07-24 18:08:22.898831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.870 [2024-07-24 18:08:22.898921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:36.870 [2024-07-24 18:08:22.898984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:36.870 [2024-07-24 18:08:22.899049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:36.870 [2024-07-24 18:08:22.899052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 Malloc0 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 [2024-07-24 18:08:23.090210] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 [2024-07-24 18:08:23.118469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2892369 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:36.870 18:08:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.128 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.038 18:08:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2892226 00:25:39.038 18:08:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting 
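For tc2, disconnect_init has just brought up nvmf_tgt inside the target namespace with core mask 0xF0 (hence the reactor notices on cores 4-7 above) and configured it over JSON-RPC: a 64 MiB malloc bdev with 512-byte blocks, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev, and data plus discovery listeners on 10.0.0.2:4420. The same sequence as explicit commands, assuming scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock RPC socket:

    # Start the target in the test's namespace (cores 4-7 => mask 0xF0).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

    # Once the RPC socket is up, mirror the rpc_cmd calls from the log.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420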
I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 [2024-07-24 18:08:25.145174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 
Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Read completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.038 Write completed with error (sct=0, sc=8) 00:25:39.038 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 [2024-07-24 18:08:25.145483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write 
completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 [2024-07-24 18:08:25.145782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with 
error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Read completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 Write completed with error (sct=0, sc=8) 00:25:39.039 starting I/O failed 00:25:39.039 [2024-07-24 18:08:25.146118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:39.039 [2024-07-24 18:08:25.146312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.039 [2024-07-24 18:08:25.146360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.039 qpair failed and we were unable to recover it. 00:25:39.039 [2024-07-24 18:08:25.146507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.039 [2024-07-24 18:08:25.146537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.039 qpair failed and we were unable to recover it. 00:25:39.039 [2024-07-24 18:08:25.146683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.039 [2024-07-24 18:08:25.146711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.039 qpair failed and we were unable to recover it. 00:25:39.039 [2024-07-24 18:08:25.146830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.039 [2024-07-24 18:08:25.146858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.039 qpair failed and we were unable to recover it. 00:25:39.039 [2024-07-24 18:08:25.147007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.039 [2024-07-24 18:08:25.147034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.039 qpair failed and we were unable to recover it. 00:25:39.039 [2024-07-24 18:08:25.147220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.039 [2024-07-24 18:08:25.147254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.039 qpair failed and we were unable to recover it. 00:25:39.039 [2024-07-24 18:08:25.147386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.039 [2024-07-24 18:08:25.147414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.039 qpair failed and we were unable to recover it. 00:25:39.039 [2024-07-24 18:08:25.147553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.147580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 
00:25:39.040 [2024-07-24 18:08:25.147720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.147747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.147911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.147939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.148078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.148112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.148255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.148289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.148451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.148479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.148612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.148640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.148773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.148801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.148965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.148992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.149170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.149217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.149375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.149403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 
00:25:39.040 [2024-07-24 18:08:25.149527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.149554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.149729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.149756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.149904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.149932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.150058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.150086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.150257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.150304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.150530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.150594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.150785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.150836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.151024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.151052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.151220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.151249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.151381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.151427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 
00:25:39.040 [2024-07-24 18:08:25.151616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.151647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.151880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.151910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.152091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.152155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.152287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.152316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.152498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.152528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.152734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.152791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.152939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.152979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.153150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.153184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.153315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.153344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 00:25:39.040 [2024-07-24 18:08:25.153632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.040 [2024-07-24 18:08:25.153678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.040 qpair failed and we were unable to recover it. 
00:25:39.040 [2024-07-24 18:08:25.153907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.040 [2024-07-24 18:08:25.153967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.040 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failure sequence above repeats 25 more times for tqpair=0xa6c250, timestamps 18:08:25.154122 through 18:08:25.159078; only the timestamps differ ...]
00:25:39.041 [2024-07-24 18:08:25.159257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.041 [2024-07-24 18:08:25.159299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.041 qpair failed and we were unable to recover it.
[... repeats 11 more times for tqpair=0x7f600c000b90, timestamps 18:08:25.159489 through 18:08:25.161744 ...]
00:25:39.042 [2024-07-24 18:08:25.161893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.042 [2024-07-24 18:08:25.161925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.042 qpair failed and we were unable to recover it.
[... repeats 56 more times for tqpair=0xa6c250, timestamps 18:08:25.162080 through 18:08:25.173140 ...]
00:25:39.043 [2024-07-24 18:08:25.173303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.043 [2024-07-24 18:08:25.173344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.043 qpair failed and we were unable to recover it.
[... repeats 5 more times for tqpair=0x7f601c000b90, timestamps 18:08:25.173488 through 18:08:25.174388 ...]
00:25:39.044 [2024-07-24 18:08:25.174524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.044 [2024-07-24 18:08:25.174552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.044 qpair failed and we were unable to recover it.
[... repeats 108 more times for tqpair=0xa6c250, timestamps 18:08:25.174684 through 18:08:25.195871 ...]
00:25:39.047 [2024-07-24 18:08:25.196020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.196063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.196246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.196272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.196396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.196437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.196611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.196635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.196788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.196812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.196961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.196988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.197151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.197201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.197349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.197392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.197544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.197586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.197760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.197788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 
00:25:39.047 [2024-07-24 18:08:25.197918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.197946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.198069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.198096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.198245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.198272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.198429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.198465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.198591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.198615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.198792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.198817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.198965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.198990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.199139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.199173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.199316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.199359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.199523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.199548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 
00:25:39.047 [2024-07-24 18:08:25.199695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.199719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.199851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.199893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.200121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.200174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.200329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.200354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.200505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.200532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.200727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.200754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.200901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.200929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.201112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.047 [2024-07-24 18:08:25.201150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.047 qpair failed and we were unable to recover it. 00:25:39.047 [2024-07-24 18:08:25.201307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.201334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.201533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.201561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.048 [2024-07-24 18:08:25.201728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.201756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.201933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.201958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.202138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.202177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.202321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.202348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.202503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.202529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.202662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.202688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.202822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.202850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.202980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.203006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.203184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.203227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.203387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.203415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.048 [2024-07-24 18:08:25.203570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.203597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.203768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.203811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.203974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.204003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.204198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.204223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.204394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.204424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.204626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.204655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.204838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.204864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.205013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.205041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.205236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.205268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.205453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.205478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.048 [2024-07-24 18:08:25.205627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.205669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.205836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.205876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.206061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.206087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.206289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.206316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.206456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.206482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.206649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.206676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.206867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.206897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.207076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.207114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.207255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.207280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 00:25:39.048 [2024-07-24 18:08:25.207406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.048 [2024-07-24 18:08:25.207447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.048 qpair failed and we were unable to recover it. 
00:25:39.049 [2024-07-24 18:08:25.207612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.207642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.207788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.207813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.207969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.207994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.208181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.208209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.208383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.208409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.208559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.208585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.208736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.208761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.208976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.209001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.209195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.209223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.209368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.209398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 
00:25:39.049 [2024-07-24 18:08:25.209597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.209624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.209793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.209823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.210013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.210043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.210225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.210252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.210384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.210416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.210543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.210569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.210764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.210790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.210959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.210988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.211161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.211187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.211330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.211366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 
00:25:39.049 [2024-07-24 18:08:25.211525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.211553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.211710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.211738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.211908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.211940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.212112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.212137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.212290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.212317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.212452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.212479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.212628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.212655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.212788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.212816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.213030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.213059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.213213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.213240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 
00:25:39.049 [2024-07-24 18:08:25.213391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.213418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.213575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.213600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.213769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.213797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.213994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.214020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.214146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.214172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.214373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.214401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.214582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.049 [2024-07-24 18:08:25.214608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.049 qpair failed and we were unable to recover it. 00:25:39.049 [2024-07-24 18:08:25.214738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.214764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.214961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.214989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.215207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.215236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 
00:25:39.050 [2024-07-24 18:08:25.215383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.215411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.215554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.215580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.215726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.215753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.215887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.215914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.216035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.216063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.216227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.216253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.216399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.216424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.216562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.216590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.216791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.216816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.216989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.217017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 
00:25:39.050 [2024-07-24 18:08:25.217221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.217247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.217431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.217461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.217635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.217661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.217809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.217852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.218052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.218081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.218252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.218279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.218433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.218480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.218684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.218711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.218859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.218884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.219076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.219113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 
00:25:39.050 [2024-07-24 18:08:25.219253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.219281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.219448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.219475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.219650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.219675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.219876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.219904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.220098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.220132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.220264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.220291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.220443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.220474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.220618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.220644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.220790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.220832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 00:25:39.050 [2024-07-24 18:08:25.221009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.050 [2024-07-24 18:08:25.221037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.050 qpair failed and we were unable to recover it. 
00:25:39.050 [2024-07-24 18:08:25.221214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.050 [2024-07-24 18:08:25.221242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.050 qpair failed and we were unable to recover it.
00:25:39.056 [... the same three-line failure — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every remaining connection attempt, with message timestamps advancing from 18:08:25.221 to 18:08:25.262 ...]
00:25:39.056 [2024-07-24 18:08:25.262871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.262898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.263045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.263072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.263292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.263321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.263466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.263493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.263642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.263668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.263821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.263849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.264018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.264045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.264172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.264198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.264344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.264377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.264545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.264573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 
00:25:39.056 [2024-07-24 18:08:25.264692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.264717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.264862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.264887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.056 [2024-07-24 18:08:25.265094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.056 [2024-07-24 18:08:25.265220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.056 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.265419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.265448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.265623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.265657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.265842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.265870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.266020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.266046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.266224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.266269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.266425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.266451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.266641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.266672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 
00:25:39.057 [2024-07-24 18:08:25.266868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.266898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.267068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.267099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.267298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.267328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.267506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.267536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.267692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.267719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.267873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.267900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.268085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.268121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.268274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.268300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.268450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.268495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.268679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.268706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 
00:25:39.057 [2024-07-24 18:08:25.268856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.268881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.269049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.269079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.269224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.269255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.269451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.269478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.269620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.269648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.269808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.269845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.270047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.270074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.270246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.270274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.270428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.270474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.270611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.270649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 
00:25:39.057 [2024-07-24 18:08:25.270841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.270882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.271052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.271082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.271234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.271260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.271415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.271442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.271617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.271646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.271843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.271881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.272058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.272087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.272263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.272294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.272494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.272525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 00:25:39.057 [2024-07-24 18:08:25.272667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.057 [2024-07-24 18:08:25.272697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.057 qpair failed and we were unable to recover it. 
00:25:39.058 [2024-07-24 18:08:25.272849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.272879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.273044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.273072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.273239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.273266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.273411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.273454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.273619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.273645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.273770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.273813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.273986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.274015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.274150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.274176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.274364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.274394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.274592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.274619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 
00:25:39.058 [2024-07-24 18:08:25.274809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.274835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.275029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.275058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.275222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.275266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.275423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.275450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.275645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.275678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.275835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.275862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.276013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.276040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.276185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.276210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.276362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.276388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.276538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.276564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 
00:25:39.058 [2024-07-24 18:08:25.276713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.276758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.276949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.276979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.277162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.277189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.277364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.277390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.277570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.277600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.277784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.277810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.277933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.277977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.278127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.278158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.278327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.278363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 00:25:39.058 [2024-07-24 18:08:25.278519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.058 [2024-07-24 18:08:25.278545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.058 qpair failed and we were unable to recover it. 
00:25:39.059 [2024-07-24 18:08:25.278667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.278692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.278907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.278933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.279120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.279156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.279312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.279341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.279534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.279561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.279760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.279790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.279985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.280015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.280188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.280215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.280388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.280417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.280617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.280660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 
00:25:39.059 [2024-07-24 18:08:25.280806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.280832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.280984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.281027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.281215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.281242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.281361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.281387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.281548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.281593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.281783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.281812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.282001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.282028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.282221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.282250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.282420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.282461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.282633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.282669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 
00:25:39.059 [2024-07-24 18:08:25.282833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.282863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.283024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.283064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.283225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.283253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.283432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.283468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.283663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.283691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.283846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.283873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.284032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.284057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.284200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.284226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.284371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.284396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.284591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.284621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 
00:25:39.059 [2024-07-24 18:08:25.284776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.284804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.284947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.284974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.285128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.285164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.285334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.285363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.285517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.285544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.285708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.059 [2024-07-24 18:08:25.285733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.059 qpair failed and we were unable to recover it. 00:25:39.059 [2024-07-24 18:08:25.285887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.285914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.286074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.286107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.286256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.286286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.286452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.286482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 
00:25:39.060 [2024-07-24 18:08:25.286677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.286704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.286874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.286904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.287123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.287153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.287306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.287333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.287498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.287535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.287713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.287739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.287906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.287933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.288088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.288138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.288334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.288369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 00:25:39.060 [2024-07-24 18:08:25.288542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-07-24 18:08:25.288569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.060 qpair failed and we were unable to recover it. 
00:25:39.060 [2024-07-24 18:08:25.288769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.060 [2024-07-24 18:08:25.288814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.060 qpair failed and we were unable to recover it.
00:25:39.060 [... the same sequence for tqpair=0x7f601c000b90 repeats 5 more times between 18:08:25.289010 and 18:08:25.289811; repetitions elided ...]
00:25:39.060 [2024-07-24 18:08:25.290020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.060 [2024-07-24 18:08:25.290052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.060 qpair failed and we were unable to recover it.
00:25:39.060 [... the same sequence for tqpair=0xa6c250 repeats about 30 more times between 18:08:25.290247 and 18:08:25.296146; repetitions elided ...]
00:25:39.340 [2024-07-24 18:08:25.296335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.340 [2024-07-24 18:08:25.296378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.340 qpair failed and we were unable to recover it.
00:25:39.340 [2024-07-24 18:08:25.296526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.296565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.296745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.296772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.296904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.296929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.297051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.297076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.297200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.297227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.297394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.297424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.297591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.297618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.297741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.297768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.297892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.297943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.298156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.298183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 
00:25:39.340 [2024-07-24 18:08:25.298326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.298359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.298481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.298527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.298696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.298725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.298876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.298901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.299026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.299057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.299243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.299273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.299425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.299451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.299622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.299647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.299817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.299847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.300015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.300046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 
00:25:39.340 [2024-07-24 18:08:25.300170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.300213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.300378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.300409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.300606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.300633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.300815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.300841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.300990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.301017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.301207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.301234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.301388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.301414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.301602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.301631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.301802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.301828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 00:25:39.340 [2024-07-24 18:08:25.301984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.340 [2024-07-24 18:08:25.302011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.340 qpair failed and we were unable to recover it. 
00:25:39.340 [2024-07-24 18:08:25.302167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.302195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.302321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.302359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.302524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.302551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.302703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.302746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.302899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.302926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.303053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.303095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.303294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.303321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.303498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.303524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.303644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.303671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.303846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.303889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 
00:25:39.341 [2024-07-24 18:08:25.304055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.304085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.304261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.304288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.304467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.304496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.304670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.304696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.304863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.304893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.305060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.305089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.305285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.305312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.305483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.305514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.305707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.305734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.305896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.305923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 
00:25:39.341 [2024-07-24 18:08:25.306095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.306146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.306322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.306360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.306536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.306563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.306739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.306766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.306930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.306956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.307126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.307168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.307369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.307398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.307592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.307619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.307793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.307820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.307994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.308025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 
00:25:39.341 [2024-07-24 18:08:25.308195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.308222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.308403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.308429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.308592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.308622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.308758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.308789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.308955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.308981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.309161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.309192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.309388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.309418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.309612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.341 [2024-07-24 18:08:25.309638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.341 qpair failed and we were unable to recover it. 00:25:39.341 [2024-07-24 18:08:25.309806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.309836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.310020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.310047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 
00:25:39.342 [2024-07-24 18:08:25.310200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.310228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.310381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.310408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.310616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.310643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.310821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.310848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.311039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.311069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.311220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.311251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.311434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.311461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.311612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.311639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.311844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.311874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.312068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.312095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 
00:25:39.342 [2024-07-24 18:08:25.312297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.312327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.312554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.312596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.312790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.312824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.312993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.313022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.313227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.313259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.313471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.313499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.313714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.313741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.313915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.313943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.314096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.314132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.314331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.314364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 
00:25:39.342 [2024-07-24 18:08:25.314615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.314669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.314844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.314871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.314998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.315024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.315176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.315203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.315358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.315385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.315565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.315594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.315834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.315881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.316033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.316060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.316201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.316228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.316398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.316427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 
00:25:39.342 [2024-07-24 18:08:25.316592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.316619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.316789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.316818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.317034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.317064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.317249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.317276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.317419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.317446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.317595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.342 [2024-07-24 18:08:25.317684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.342 qpair failed and we were unable to recover it. 00:25:39.342 [2024-07-24 18:08:25.317859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.317903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.318095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.318148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.318268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.318295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.318455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.318482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 
00:25:39.343 [2024-07-24 18:08:25.318685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.318712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.318934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.318987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.319174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.319202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.319383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.319412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.319558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.319590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.319766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.319793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.319945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.319971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.320166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.320208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.320427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.320457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.320644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.320672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 
00:25:39.343 [2024-07-24 18:08:25.320824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.320852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.321007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.321035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.321240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.321271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.321454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.321482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.321637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.321675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.321812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.321843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.322032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.322059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.322222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.322250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.322428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.322458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 00:25:39.343 [2024-07-24 18:08:25.322656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.343 [2024-07-24 18:08:25.322684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.343 qpair failed and we were unable to recover it. 
00:25:39.343 [2024-07-24 18:08:25.322835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.343 [2024-07-24 18:08:25.322879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.343 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for tqpair=0xa6c250 from 18:08:25.323113 through 18:08:25.341909, differing only in timestamps ...]
00:25:39.346 [2024-07-24 18:08:25.342085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.346 [2024-07-24 18:08:25.342146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.346 qpair failed and we were unable to recover it.
[... the failure sequence continues, alternating between tqpair=0x7f600c000b90 and tqpair=0xa6c250 ...]
00:25:39.349 [2024-07-24 18:08:25.367231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.349 [2024-07-24 18:08:25.367258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.349 qpair failed and we were unable to recover it.
00:25:39.349 [2024-07-24 18:08:25.367430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.367477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.367688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.367717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.367886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.367917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.368119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.368156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.368311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.368339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.368477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.368505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.368698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.368729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.368987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.369035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.369209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.369241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.369415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.369443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 
00:25:39.349 [2024-07-24 18:08:25.369606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.369635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.369780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.369812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.370004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.370035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.370219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.370250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.370425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.370452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.370711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.370766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.370966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.370996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.371174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.371202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.371335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.349 [2024-07-24 18:08:25.371371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.349 qpair failed and we were unable to recover it. 00:25:39.349 [2024-07-24 18:08:25.371546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.371573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 
00:25:39.350 [2024-07-24 18:08:25.371750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.371780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.371936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.371963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.372117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.372167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.372337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.372373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.372546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.372572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.372749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.372776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.372951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.372981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.373137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.373169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.373335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.373371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.373573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.373600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 
00:25:39.350 [2024-07-24 18:08:25.373746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.373776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.373953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.373980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.374124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.374171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.374346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.374376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.374521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.374550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.374719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.374745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.374920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.374963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.375146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.375174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.375313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.375342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.375505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.375534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 
00:25:39.350 [2024-07-24 18:08:25.375696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.375726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.375917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.375946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.376098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.376163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.376320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.376347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.376520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.376564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.376907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.376962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.377170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.377212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.377360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.377390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.377535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.377565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.377706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.377733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 
00:25:39.350 [2024-07-24 18:08:25.377881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.377908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.378114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.378152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.378340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.378370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.378508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.378535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.378690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.378721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.378845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.378873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.379021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.350 [2024-07-24 18:08:25.379048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.350 qpair failed and we were unable to recover it. 00:25:39.350 [2024-07-24 18:08:25.379207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.379235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.379365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.379392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.379567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.379594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 
00:25:39.351 [2024-07-24 18:08:25.379771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.379802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.379976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.380003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.380183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.380213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.380417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.380444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.380593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.380621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.380793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.380819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.380970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.381000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.381206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.381236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.381428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.381458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.381634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.381661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 
00:25:39.351 [2024-07-24 18:08:25.381838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.381865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.382064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.382094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.382266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.382293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.382451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.382478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.382647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.382677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.382895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.382942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.383138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.383170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.383339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.383372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.383518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.383548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.383764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.383812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 
00:25:39.351 [2024-07-24 18:08:25.384001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.384028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.384179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.384206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.384346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.384391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.384561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.384590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.384733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.384763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.384935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.384962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.385135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.385165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.385322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.385351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.385496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.385525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.385721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.385748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 
00:25:39.351 [2024-07-24 18:08:25.385924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.385953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.386122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.386153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.386345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.386375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.386556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.386583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.351 [2024-07-24 18:08:25.386717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.351 [2024-07-24 18:08:25.386744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.351 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.386921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.386949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.387133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.387163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.387314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.387341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.387521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.387548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.387693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.387736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 
00:25:39.352 [2024-07-24 18:08:25.387898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.387928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.388100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.388134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.388300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.388330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.388535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.388562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.388711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.388756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.388924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.388952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.389076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.389127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.389317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.389346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.389513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.389543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.389701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.389729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 
00:25:39.352 [2024-07-24 18:08:25.389891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.389920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.390123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.390151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.390304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.390332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.390510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.390537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.390723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.390750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.390881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.390908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.391057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.391087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.391288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.391316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.391484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.391528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.391680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.391707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 
00:25:39.352 [2024-07-24 18:08:25.391887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.391917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.392083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.392119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.392278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.392309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.392507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.392537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.392693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.392722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.392924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.392951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.393119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.393150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.393317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.393347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.393505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.393532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.393666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.393693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 
00:25:39.352 [2024-07-24 18:08:25.393858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.393887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.394049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.394079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.394258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.352 [2024-07-24 18:08:25.394290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.352 qpair failed and we were unable to recover it. 00:25:39.352 [2024-07-24 18:08:25.394444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.353 [2024-07-24 18:08:25.394472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.353 qpair failed and we were unable to recover it. 00:25:39.353 [2024-07-24 18:08:25.394625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.353 [2024-07-24 18:08:25.394652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.353 qpair failed and we were unable to recover it. 00:25:39.353 [2024-07-24 18:08:25.394829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.353 [2024-07-24 18:08:25.394873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.353 qpair failed and we were unable to recover it. 00:25:39.353 [2024-07-24 18:08:25.395022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.353 [2024-07-24 18:08:25.395052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.353 qpair failed and we were unable to recover it. 00:25:39.353 [2024-07-24 18:08:25.395235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.353 [2024-07-24 18:08:25.395264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.353 qpair failed and we were unable to recover it. 00:25:39.353 [2024-07-24 18:08:25.395391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.353 [2024-07-24 18:08:25.395418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.353 qpair failed and we were unable to recover it. 00:25:39.353 [2024-07-24 18:08:25.395575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.353 [2024-07-24 18:08:25.395605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.353 qpair failed and we were unable to recover it. 
00:25:39.358 [2024-07-24 18:08:25.436335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.436365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.436537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.436564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.436685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.436727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.436895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.436925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.437115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.437145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.437306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.437333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.437487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.437514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.437664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.437691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.437891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.437920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.438052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.438078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 
00:25:39.358 [2024-07-24 18:08:25.438236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.438264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.438434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.438464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.358 qpair failed and we were unable to recover it. 00:25:39.358 [2024-07-24 18:08:25.438642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.358 [2024-07-24 18:08:25.438669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.438850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.438877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.439044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.439074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.439228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.439260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.439450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.439480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.439651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.439678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.439848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.439877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.440018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.440048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 
00:25:39.359 [2024-07-24 18:08:25.440242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.440269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.440421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.440449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.440622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.440652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.440817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.440846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.441009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.441038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.441185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.441212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.441345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.441371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.441648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.441715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.441909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.441938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.442115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.442143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 
00:25:39.359 [2024-07-24 18:08:25.442291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.442319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.442473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.442499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.442668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.442697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.442873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.442900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.443028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.443068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.443243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.443273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.443479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.443505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.443634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.443661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.443857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.443887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.444048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.444077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 
00:25:39.359 [2024-07-24 18:08:25.444242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.444272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.444452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.444479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.444609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.444653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.444918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.444972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.445163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.445194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.445367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.445394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.445566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.445596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.445761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.445791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.445935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.445966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 00:25:39.359 [2024-07-24 18:08:25.446168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.359 [2024-07-24 18:08:25.446196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.359 qpair failed and we were unable to recover it. 
00:25:39.359 [2024-07-24 18:08:25.446377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.446405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.446561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.446628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.446799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.446827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.447022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.447048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.447190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.447219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.447385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.447413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.447584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.447612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.447776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.447802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.447990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.448018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.448169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.448198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 
00:25:39.360 [2024-07-24 18:08:25.448345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.448373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.448539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.448564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.448762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.448790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.448992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.449019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.449196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.449226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.449405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.449432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.449582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.449624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.449832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.449890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.450038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.450067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.450223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.450250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 
00:25:39.360 [2024-07-24 18:08:25.450406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.450431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.450566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.450591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.450754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.450782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.450952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.450981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.451131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.451173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.451303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.451329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.451493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.451521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.451698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.451724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.451869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.451894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.452041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.452069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 
00:25:39.360 [2024-07-24 18:08:25.452220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.452249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.452398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.452424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.452581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.452608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.452798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.452828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.452997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.453026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.453198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.453224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.453377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.453402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.360 [2024-07-24 18:08:25.453552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.360 [2024-07-24 18:08:25.453593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.360 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.453770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.453795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.453946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.453971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 
00:25:39.361 [2024-07-24 18:08:25.454123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.454166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.454359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.454384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.454572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.454600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.454772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.454797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.454915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.454958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.455155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.455191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.455325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.455354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.455501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.455527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.455644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.455671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.455801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.455829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 
00:25:39.361 [2024-07-24 18:08:25.456002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.456032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.456216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.456244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.456394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.456437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.456665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.456695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.456889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.456917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.457067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.457093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.457272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.457300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.457442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.457470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.457597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.457625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.457800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.457825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 
00:25:39.361 [2024-07-24 18:08:25.457982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.458008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.458145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.458188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.458392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.458420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.458571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.458597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.458748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.458773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.458931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.458961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.459127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.459171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.459326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.459353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.459485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.459512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.459683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.459708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 
00:25:39.361 [2024-07-24 18:08:25.459859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.459887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.460066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.460092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.460251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.460279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.460485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.460547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.460725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.460750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.361 qpair failed and we were unable to recover it. 00:25:39.361 [2024-07-24 18:08:25.460872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.361 [2024-07-24 18:08:25.460898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.362 qpair failed and we were unable to recover it. 00:25:39.362 [2024-07-24 18:08:25.461027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.362 [2024-07-24 18:08:25.461068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.362 qpair failed and we were unable to recover it. 00:25:39.362 [2024-07-24 18:08:25.461210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.362 [2024-07-24 18:08:25.461240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.362 qpair failed and we were unable to recover it. 00:25:39.362 [2024-07-24 18:08:25.461413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.362 [2024-07-24 18:08:25.461441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.362 qpair failed and we were unable to recover it. 00:25:39.362 [2024-07-24 18:08:25.461614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.362 [2024-07-24 18:08:25.461640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.362 qpair failed and we were unable to recover it. 
00:25:39.362 [2024-07-24 18:08:25.461835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.362 [2024-07-24 18:08:25.461865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.362 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 18:08:25.462002 through 18:08:25.503943 ...]
00:25:39.368 [2024-07-24 18:08:25.504082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.368 [2024-07-24 18:08:25.504118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.368 qpair failed and we were unable to recover it.
00:25:39.368 [2024-07-24 18:08:25.504281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.504313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.504488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.504516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.504693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.504721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.504912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.504952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.505125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.505164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.505316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.505343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.505492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.505519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.505665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.505692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.505904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.505934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.506090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.506122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 
00:25:39.368 [2024-07-24 18:08:25.506273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.506299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.506459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.506484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.506639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.506682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.506827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.506852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.507011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.507053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.507199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.507228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.507391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.507419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.507559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.507584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.507731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.507773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.507976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.508010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 
00:25:39.368 [2024-07-24 18:08:25.508189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.508224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.508399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.508426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.508579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.508606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.508761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.508788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.508965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.508994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.509149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.509175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.509327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.509352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.509502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.509536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.509705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.509736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.509904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.509929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 
00:25:39.368 [2024-07-24 18:08:25.510061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.510087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.510221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.368 [2024-07-24 18:08:25.510246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.368 qpair failed and we were unable to recover it. 00:25:39.368 [2024-07-24 18:08:25.510394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.510419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.510575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.510600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.510740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.510769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.510940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.510970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.511138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.511168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.511320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.511348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.511523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.511553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.511711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.511753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 
00:25:39.369 [2024-07-24 18:08:25.511907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.511931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.512120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.512148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.512311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.512341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.512594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.512646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.512831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.512861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.513019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.513045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.513227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.513269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.513409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.513437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.513606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.513633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.513786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.513813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 
00:25:39.369 [2024-07-24 18:08:25.513941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.513967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.514144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.514172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.514327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.514358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.514559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.514586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.514699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.514742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.514894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.514922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.515069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.515097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.515280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.515307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.515435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.515476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.515657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.515684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 
00:25:39.369 [2024-07-24 18:08:25.515829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.515854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.516005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.516032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.516187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.516215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.516372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.516414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.516604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.516633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.516814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.516840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.516978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.517006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.517148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.517175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.517356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.517402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 00:25:39.369 [2024-07-24 18:08:25.517586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.369 [2024-07-24 18:08:25.517613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.369 qpair failed and we were unable to recover it. 
00:25:39.369 [2024-07-24 18:08:25.517752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.517780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.517951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.517980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.518118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.518148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.518330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.518357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.518528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.518557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.518802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.518832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.519023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.519052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.519217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.519254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.519389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.519428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.519586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.519615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 
00:25:39.370 [2024-07-24 18:08:25.519808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.519838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.520032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.520059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.520232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.520260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.520390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.520415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.520534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.520559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.520706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.520731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.520882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.520908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.521109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.521139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.521303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.521333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.521511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 
00:25:39.370 [2024-07-24 18:08:25.521662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.521689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.521830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.521857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.522004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.522048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.522247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.522275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.522400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.522426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.522552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.522577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.522734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.522761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.522913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.522941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.523107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.523159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.523309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.523335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 
00:25:39.370 [2024-07-24 18:08:25.523530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.523560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.523760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.523787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.523981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.524010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.524252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.524282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.524454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.524484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.524659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.524685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.524851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.524880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.370 [2024-07-24 18:08:25.525083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.370 [2024-07-24 18:08:25.525116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.370 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.525262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.525287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.525417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.525443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 
00:25:39.371 [2024-07-24 18:08:25.525567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.525592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.525744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.525769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.525940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.525972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.526166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.526192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.526368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.526397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.526545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.526574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.526765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.526794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.526994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.527021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.527159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.527186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.527340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.527367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 
00:25:39.371 [2024-07-24 18:08:25.527492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.527517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.527659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.527684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.527802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.527827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.528014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.528043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.528191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.528219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.528374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.528400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.528574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.528617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.528770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.528797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.528949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.528977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.529168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.529195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 
00:25:39.371 [2024-07-24 18:08:25.529350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.529392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.529554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.529583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.529716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.529744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.529896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.529924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.530067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.530100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.530319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.530348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.530501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.530532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.530714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.530741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.530953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.530980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.531137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.531182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 
00:25:39.371 [2024-07-24 18:08:25.531321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.531350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.531513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.531539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.531689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.531717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.531872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.531899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.532113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.532140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.371 qpair failed and we were unable to recover it. 00:25:39.371 [2024-07-24 18:08:25.532286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.371 [2024-07-24 18:08:25.532311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.532503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.532532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.532769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.532825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.532999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.533026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.533173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.533199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 
00:25:39.372 [2024-07-24 18:08:25.533373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.533401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.533661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.533713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.533896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.533925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.534092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.534125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.534279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.534306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.534426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.534451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.534605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.534631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.534781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.534814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.534969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.534995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.535167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.535197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 
00:25:39.372 [2024-07-24 18:08:25.535346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.535376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.535574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.535600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.535746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.535774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.535976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.536006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.536179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.536209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.536399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.536426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.536619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.536648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.536812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.536841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.537029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.537058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.537217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.537243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 
00:25:39.372 [2024-07-24 18:08:25.537439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.537468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.537733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.537787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.537954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.537983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.538184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.538211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.538342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.538369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.538519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.538561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.538754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.538784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.538982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.539012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.372 [2024-07-24 18:08:25.539146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.372 [2024-07-24 18:08:25.539173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.372 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.539317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.539343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 
00:25:39.373 [2024-07-24 18:08:25.539515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.539558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.539698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.539724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.539898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.539942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.540133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.540163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.540332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.540361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.540529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.540555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.540675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.540700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.540913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.540939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.541090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.541125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.541284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.541311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 
00:25:39.373 [2024-07-24 18:08:25.541468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.541495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.541677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.541704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.541861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.541888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.542042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.542069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.542257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.542295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.542478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.542507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.542666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.542695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.542860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.542887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.543085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.543124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.543269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.543299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 
00:25:39.373 [2024-07-24 18:08:25.543444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.543474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.543612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.543638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.543766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.543793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.543993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.544022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.544232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.544263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.544383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.544408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.544531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.544558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.544706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.544732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.544912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.544941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.545143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.545170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 
00:25:39.373 [2024-07-24 18:08:25.545345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.545374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.545544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.545574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.545739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.545768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.545916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.545942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.546088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.546148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.546319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.546348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.373 qpair failed and we were unable to recover it. 00:25:39.373 [2024-07-24 18:08:25.546522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.373 [2024-07-24 18:08:25.546552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.546725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.546752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.546899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.546936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.547111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.547141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 
00:25:39.374 [2024-07-24 18:08:25.547316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.547345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.547516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.547543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.547677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.547702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.547818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.547843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.547974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.547999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.548175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.548203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.548379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.548408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.548579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.548609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.548767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.548793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.548969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.548996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 
00:25:39.374 [2024-07-24 18:08:25.549147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.549176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.549344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.549375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.549551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.549581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.549760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.549786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.549960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.549989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.550135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.550165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.550338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.550365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.550520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.550547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.550674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.550700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.550818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.550844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 
00:25:39.374 [2024-07-24 18:08:25.550995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.551022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.551214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.551241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.551408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.551437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.551641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.551705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.551874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.551903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.552056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.552087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.552284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.552315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.552546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.552572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.552747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.552778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.552925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.552963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 
00:25:39.374 [2024-07-24 18:08:25.553100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.553151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.553343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.553373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.553507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.553537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.553679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.553706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.374 qpair failed and we were unable to recover it. 00:25:39.374 [2024-07-24 18:08:25.553856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.374 [2024-07-24 18:08:25.553883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.554014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.554040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.554266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.554296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.554468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.554495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.554620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.554645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.554826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.554856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 
00:25:39.375 [2024-07-24 18:08:25.555020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.555049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.555210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.555236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.555363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.555388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.555540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.555567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.555777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.555806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.555974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.556004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.556160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.556187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.556339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.556365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.556547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.556573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.556690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.556715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 
00:25:39.375 [2024-07-24 18:08:25.556857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.556882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.557077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.557114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.557277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.557307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.557452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.557479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.557655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.557681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.557830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.557873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.558085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.558117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.558270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.558297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.558439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.558468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.558635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.558665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 
00:25:39.375 [2024-07-24 18:08:25.558825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.558855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.559028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.559055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.559187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.559213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.559408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.559437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.559612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.559642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.559812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.559839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.560013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.560043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.560207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.560237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.560437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.560465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.560634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.560661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 
00:25:39.375 [2024-07-24 18:08:25.560830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.560859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.560991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.561021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.375 [2024-07-24 18:08:25.561296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.375 [2024-07-24 18:08:25.561324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.375 qpair failed and we were unable to recover it. 00:25:39.376 [2024-07-24 18:08:25.561532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.561558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it. 00:25:39.376 [2024-07-24 18:08:25.561681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.561708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it. 00:25:39.376 [2024-07-24 18:08:25.561872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.561914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it. 00:25:39.376 [2024-07-24 18:08:25.562095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.562134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it. 00:25:39.376 [2024-07-24 18:08:25.562293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.562320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it. 00:25:39.376 [2024-07-24 18:08:25.562471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.562497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it. 00:25:39.376 [2024-07-24 18:08:25.562648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.562679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it. 
00:25:39.376 [2024-07-24 18:08:25.562848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.562877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it.
00:25:39.376 [2024-07-24 18:08:25.565042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7a230 is same with the state(6) to be set
00:25:39.376 [2024-07-24 18:08:25.565294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.376 [2024-07-24 18:08:25.565335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:39.376 qpair failed and we were unable to recover it.
00:25:39.666 [... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." messages repeat, alternating between tqpair=0xa6c250 and tqpair=0x7f600c000b90, addr=10.0.0.2, port=4420, from 18:08:25.563 through 18:08:25.602 ...]
00:25:39.666 [2024-07-24 18:08:25.603095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.603135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.603308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.603335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.603465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.603491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.603668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.603695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.603847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.603874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.604030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.604056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.604222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.604254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.604410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.604441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.604594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.604621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.604808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.604837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 
00:25:39.666 [2024-07-24 18:08:25.605011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.605039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.605195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.605223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.605350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.605377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.605525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.605552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.605689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.605719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.605881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.605911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.606070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.606100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.606255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.606281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.606407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.606434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.606556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.606583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 
00:25:39.666 [2024-07-24 18:08:25.606746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.666 [2024-07-24 18:08:25.606775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.666 qpair failed and we were unable to recover it. 00:25:39.666 [2024-07-24 18:08:25.606928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.606954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.607072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.607099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.607256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.607300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.607444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.607473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.607630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.607657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.607775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.607802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.608010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.608040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.608200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.608227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.608352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.608379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 
00:25:39.667 [2024-07-24 18:08:25.608553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.608582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.608789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.608815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.608990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.609020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.609204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.609234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.609397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.609429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.609556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.609600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.609786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.609814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.609989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.610016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.610148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.610176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.610329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.610373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 
00:25:39.667 [2024-07-24 18:08:25.610542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.610569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.610764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.610794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.610934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.610964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.611138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.611165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.611313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.611355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.611523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.611552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.611716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.611743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.611947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.611977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.612145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.612173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.612331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.612358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 
00:25:39.667 [2024-07-24 18:08:25.612506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.612536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.612678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.612708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.612878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.612906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.613027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.613072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.613252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.613279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.613430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.613457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.613610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.613636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.613785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.613812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.667 [2024-07-24 18:08:25.613984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.667 [2024-07-24 18:08:25.614011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.667 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.614199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.614229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 
00:25:39.668 [2024-07-24 18:08:25.614365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.614395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.614539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.614566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.614686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.614713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.614901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.614927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.615081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.615114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.615308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.615338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.615525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.615559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.615707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.615745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.615944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.615974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.616131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.616163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 
00:25:39.668 [2024-07-24 18:08:25.616344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.616376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.616556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.616586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.616757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.616784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.616909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.616935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.617056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.617083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.617273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.617308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.617481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.617508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.617679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.617709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.617861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.617889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.618039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.618070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 
00:25:39.668 [2024-07-24 18:08:25.618274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.618302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.618480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.618507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.618635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.618661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.618780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.618808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.618931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.618958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.619114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.619140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.619268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.619294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.619422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.619449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.619601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.619628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.668 qpair failed and we were unable to recover it. 00:25:39.668 [2024-07-24 18:08:25.619812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.668 [2024-07-24 18:08:25.619839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 
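Note: errno = 111 is ECONNREFUSED on Linux. Each failure above is the same two-step sequence: SPDK's POSIX sock layer (posix.c:posix_sock_create) reports that connect() to 10.0.0.2:4420 was refused, and the NVMe/TCP initiator (nvme_tcp.c:nvme_tcp_qpair_connect_sock) then gives up on that queue pair. A connection is refused when the peer is reachable but nothing is accepting on the port, i.e. no NVMe-oF TCP listener is up at 10.0.0.2:4420 at that moment. The following standalone sketch (plain POSIX sockets, not SPDK code; the address and port are copied from the log purely for illustration) reproduces the same errno:

    /* Minimal sketch: connect() to a reachable host with no listener on
     * the port fails with errno = 111 (ECONNREFUSED), matching the
     * "connect() failed, errno = 111" lines above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Target taken from the log above; assumed reachable with
         * nothing bound to the port. */
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* A reachable host with no acceptor on the port answers with
             * RST, and connect() fails with errno = 111. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Built with cc and run while no listener is bound to the port, this prints "connect() failed, errno = 111 (Connection refused)".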
00:25:39.669 [2024-07-24 18:08:25.619971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.619999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.620137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.620178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.620320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.620349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.620515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.620543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.620692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.620719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.620850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.620878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.621005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.621033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.621211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.621239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.621411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.621457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.621613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.621657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 
00:25:39.669 [2024-07-24 18:08:25.621837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.621867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.622009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.622036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.622217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.622269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.622515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.622567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.622774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.622819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.622978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.623006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.623217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.623262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.623445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.623477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.623630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.623660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.623825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.623855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 
00:25:39.669 [2024-07-24 18:08:25.624018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.624047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.624254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.624281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.624617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.624682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.624867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.624898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.625068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.625095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.625262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.625290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.625437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.625482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.625661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.625708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.625858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.625885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.626007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.626036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 
00:25:39.669 [2024-07-24 18:08:25.626192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.626219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.626399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.626444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.626621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.626666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.626800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.626827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.627001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.669 [2024-07-24 18:08:25.627028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.669 qpair failed and we were unable to recover it. 00:25:39.669 [2024-07-24 18:08:25.627204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.670 [2024-07-24 18:08:25.627252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.670 qpair failed and we were unable to recover it. 00:25:39.670 [2024-07-24 18:08:25.627417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.670 [2024-07-24 18:08:25.627462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.670 qpair failed and we were unable to recover it. 00:25:39.670 [2024-07-24 18:08:25.627695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.670 [2024-07-24 18:08:25.627748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.670 qpair failed and we were unable to recover it. 00:25:39.670 [2024-07-24 18:08:25.627906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.670 [2024-07-24 18:08:25.627934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.670 qpair failed and we were unable to recover it. 00:25:39.670 [2024-07-24 18:08:25.628093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.670 [2024-07-24 18:08:25.628146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.670 qpair failed and we were unable to recover it. 
00:25:39.670 [2024-07-24 18:08:25.628321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.670 [2024-07-24 18:08:25.628365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.670 qpair failed and we were unable to recover it.
00:25:39.670 [... the connect() failed / sock connection error / qpair failed triplet above repeated 38 more times for tqpair=0x7f6014000b90, 2024-07-24 18:08:25.628567 through 18:08:25.636189 ...]
00:25:39.671 [2024-07-24 18:08:25.636409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.671 [2024-07-24 18:08:25.636454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.671 qpair failed and we were unable to recover it.
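Note: errno 111 on Linux is ECONNREFUSED -- the target at 10.0.0.2:4420 (the NVMe/TCP well-known port) is actively refusing the TCP connection, which is what posix_sock_create reports while no listener is bound on that port. A minimal standalone sketch (illustrative only, not SPDK source) that reproduces the same errno:

/* build: cc -o refuse refuse.c ; run with nothing listening on the port */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With the host reachable but no listener bound, the peer answers
         * with RST and connect() fails with ECONNREFUSED (111 on Linux),
         * matching the "errno = 111" records in this log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}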
00:25:39.671 [... the same triplet repeated 135 more times for tqpair=0xa6c250, 2024-07-24 18:08:25.636655 through 18:08:25.663286 ...]
00:25:39.674 [2024-07-24 18:08:25.663479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.674 [2024-07-24 18:08:25.663524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.674 qpair failed and we were unable to recover it.
00:25:39.675 [... the same triplet repeated 34 more times for tqpair=0x7f601c000b90, 2024-07-24 18:08:25.663700 through 18:08:25.670251 ...]
00:25:39.675 [2024-07-24 18:08:25.670398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.675 [2024-07-24 18:08:25.670425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.675 qpair failed and we were unable to recover it. 00:25:39.675 [2024-07-24 18:08:25.670577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.675 [2024-07-24 18:08:25.670605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.675 qpair failed and we were unable to recover it. 00:25:39.675 [2024-07-24 18:08:25.670773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.675 [2024-07-24 18:08:25.670803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.670977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.671004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.671183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.671211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.671369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.671415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.671596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.671623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.671770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.671797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.672007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.672034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.672192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.672220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 
00:25:39.676 [2024-07-24 18:08:25.672376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.672420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.672562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.672593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.672739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.672766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.672962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.672992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.673158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.673189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.673362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.673389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.673557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.673587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.673732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.673762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.673941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.673968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.674154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.674185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 
00:25:39.676 [2024-07-24 18:08:25.674362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.674389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.674524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.674551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.674746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.674776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.674906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.674937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.675137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.675165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.675336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.675366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.675531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.675561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.675735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.675762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.675935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.675965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.676139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.676169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 
00:25:39.676 [2024-07-24 18:08:25.676317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.676344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.676463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.676490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.676620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.676647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.676826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.676853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.677002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.677033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.677211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.677239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.677416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.677444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.677621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.677652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.677857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.677901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 00:25:39.676 [2024-07-24 18:08:25.678115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.676 [2024-07-24 18:08:25.678159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.676 qpair failed and we were unable to recover it. 
00:25:39.676 [2024-07-24 18:08:25.678339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.678366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.678507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.678537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.678714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.678741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.678908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.678938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.679082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.679117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.679290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.679321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.679462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.679492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.679664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.679694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.679892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.679919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.680120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.680150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 
00:25:39.677 [2024-07-24 18:08:25.680320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.680349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.680523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.680551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.680724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.680789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.680958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.680988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.681190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.681217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.681395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.681422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.681545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.681573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.681762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.681789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.681914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.681961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.682163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.682194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 
00:25:39.677 [2024-07-24 18:08:25.682337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.682365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.682524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.682569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.682736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.682766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.682918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.682946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.683131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.683159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.683327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.683356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.683529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.683556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.683734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.683760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.683888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.683917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.684108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.684136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 
00:25:39.677 [2024-07-24 18:08:25.684294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.684321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.684499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.684527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.684660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.684687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.684836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.684863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.684985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.685013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.685164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.685193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.685321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.685349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.677 [2024-07-24 18:08:25.685553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.677 [2024-07-24 18:08:25.685583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.677 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.685775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.685802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.685973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.686003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 
00:25:39.678 [2024-07-24 18:08:25.686189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.686216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.686331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.686359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.686514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.686542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.686727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.686754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.686940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.686967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.687099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.687137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.687332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.687361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.687516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.687543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.687706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.687736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.687908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.687938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 
00:25:39.678 [2024-07-24 18:08:25.688110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.688137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.688332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.688361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.688566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.688593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.688748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.688776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.688926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.688953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.689122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.689150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.689270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.689297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.689446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.689473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.689649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.689676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.689862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.689890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 
00:25:39.678 [2024-07-24 18:08:25.690060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.690090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.690275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.690303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.690432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.690460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.690617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.690644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.690797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.690824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.690980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.691008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.691181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.691211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.691373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.691403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.691549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.691576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.678 [2024-07-24 18:08:25.691731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.691775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 
00:25:39.678 [2024-07-24 18:08:25.691959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.678 [2024-07-24 18:08:25.691986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.678 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.692162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.692190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.692371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.692401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.692539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.692571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.692736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.692764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.692924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.692954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.693125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.693156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.693337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.693365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.693556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.693586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.693791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.693818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 
00:25:39.679 [2024-07-24 18:08:25.694000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.694027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.694197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.694228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.694363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.694393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.694596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.694623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.694791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.694821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.694987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.695022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.695232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.695260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.695431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.695461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.695651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.695681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 00:25:39.679 [2024-07-24 18:08:25.695881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.679 [2024-07-24 18:08:25.695908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.679 qpair failed and we were unable to recover it. 
00:25:39.679 [2024-07-24 18:08:25.696085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.679 [2024-07-24 18:08:25.696122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.679 qpair failed and we were unable to recover it.
00:25:39.679 last error sequence repeated ~210 times (connect() to 10.0.0.2:4420 refused, errno = 111; tqpair=0x7f601c000b90 not recovered) through [2024-07-24 18:08:25.737672]
00:25:39.685 [2024-07-24 18:08:25.737699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.685 qpair failed and we were unable to recover it.
00:25:39.685 [2024-07-24 18:08:25.737865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.737892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.738041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.738082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.738270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.738297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.738473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.738499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.738676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.738706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.738876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.738903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.739054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.739082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.739268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.739294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.739446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.739474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.739623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.739650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 
00:25:39.685 [2024-07-24 18:08:25.739825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.739869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.740032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.740059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.740227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.740255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.740389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.740417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.740570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.740598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.740772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.740802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.740944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.740975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.741150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.741178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.741299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.741327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.741512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.741542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 
00:25:39.685 [2024-07-24 18:08:25.741737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.741763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.741884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.741911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.742059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.742086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.742314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.742341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.742493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.742521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.742709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.742739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.742915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.742941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.743115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.685 [2024-07-24 18:08:25.743160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.685 qpair failed and we were unable to recover it. 00:25:39.685 [2024-07-24 18:08:25.743318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.743345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.743473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.743499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 
00:25:39.686 [2024-07-24 18:08:25.743613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.743639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.743784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.743811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.743986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.744012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.744187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.744218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.744358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.744387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.744558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.744585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.744750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.744776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.744933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.744979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.745157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.745185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.745314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.745359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 
00:25:39.686 [2024-07-24 18:08:25.745529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.745559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.745717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.745744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.745899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.745926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.746132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.746163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.746362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.746389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.746583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.746612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.746795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.746822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.746969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.746996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.747193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.747223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.747387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.747416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 
00:25:39.686 [2024-07-24 18:08:25.747561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.747589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.747747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.747774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.747906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.747933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.748123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.748150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.748297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.748329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.748511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.748538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.748659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.748686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.748833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.748877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.749048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.749076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.749235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.749262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 
00:25:39.686 [2024-07-24 18:08:25.749431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.749462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.749613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.749642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.749818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.749844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.749996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.750023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.750199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.686 [2024-07-24 18:08:25.750228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.686 qpair failed and we were unable to recover it. 00:25:39.686 [2024-07-24 18:08:25.750377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.750408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.750560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.750586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.750760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.750805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.750981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.751009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.751158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.751185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 
00:25:39.687 [2024-07-24 18:08:25.751360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.751403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.751596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.751623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.751785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.751815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.751957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.751988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.752186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.752213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.752374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.752404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.752565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.752595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.752763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.752790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.752918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.752945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.753159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.753189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 
00:25:39.687 [2024-07-24 18:08:25.753343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.753370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.753493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.753520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.753729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.753758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.753932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.753959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.754156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.754187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.754357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.754387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.754564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.754592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.754791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.754821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.755015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.755046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.755194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.755222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 
00:25:39.687 [2024-07-24 18:08:25.755349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.755377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.755545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.755575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.755756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.755783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.755959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.755990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.756138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.756169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.756349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.756376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.756567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.756598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.687 qpair failed and we were unable to recover it. 00:25:39.687 [2024-07-24 18:08:25.756731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.687 [2024-07-24 18:08:25.756761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.756950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.756981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.757169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.757196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 
00:25:39.688 [2024-07-24 18:08:25.757372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.757417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.757584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.757611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.757743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.757770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.757926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.757953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.758133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.758172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.758356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.758387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.758565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.758610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.758779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.758806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.759001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.759031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.759232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.759260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 
00:25:39.688 [2024-07-24 18:08:25.759419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.759446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.759640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.759670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.759855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.759882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.759998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.760025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.760179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.760207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.760336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.760364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.760515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.760543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.760708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.760738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.760927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.760957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.761115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.761143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 
00:25:39.688 [2024-07-24 18:08:25.761275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.761303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.761450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.761481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.761662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.761690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.761842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.761869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.761998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.762025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.762176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.762204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.762379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.762409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.762615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.762642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.762822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.762849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 00:25:39.688 [2024-07-24 18:08:25.763020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.688 [2024-07-24 18:08:25.763050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.688 qpair failed and we were unable to recover it. 
00:25:39.688 [2024-07-24 18:08:25.763232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.688 [2024-07-24 18:08:25.763260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.688 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 18:08:25.763439 through 18:08:25.804831 ...]
00:25:39.695 [2024-07-24 18:08:25.804982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.805012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.805188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.805216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.805365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.805410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.805614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.805641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.805795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.805822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.805959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.805990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.806152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.806180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.806326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.806353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.806527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.806557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.806723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.806753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 
00:25:39.695 [2024-07-24 18:08:25.806916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.806946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.807107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.807153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.807307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.807335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.807487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.807515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.807655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.807686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.807864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.807907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.808085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.808117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.808294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.808322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.808491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.808521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.808689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.808716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 
00:25:39.695 [2024-07-24 18:08:25.808888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.808918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.809089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.809129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.809336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.809364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.809502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.809532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.809713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.809741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.809915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.809942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.810117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.810148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.810359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.810386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.810535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.810562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.810679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.810706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 
00:25:39.695 [2024-07-24 18:08:25.810879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.810909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.695 qpair failed and we were unable to recover it. 00:25:39.695 [2024-07-24 18:08:25.811080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.695 [2024-07-24 18:08:25.811124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.811283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.811314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.811442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.811470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.811615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.811642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.811793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.811837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.812002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.812032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.812177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.812205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.812327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.812356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.812510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.812538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 
00:25:39.696 [2024-07-24 18:08:25.812716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.812743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.812872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.812899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.813054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.813082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.813237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.813264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.813471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.813502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.813677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.813705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.813886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.813913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.814087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.814124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.814324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.814352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.814478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.814505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 
00:25:39.696 [2024-07-24 18:08:25.814656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.814698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.814858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.814889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.815031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.815058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.815240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.815268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.815426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.815456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.815610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.815637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.815762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.815789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.815941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.815968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.816141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.816179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.816359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.816390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 
00:25:39.696 [2024-07-24 18:08:25.816525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.816556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.696 [2024-07-24 18:08:25.816730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.696 [2024-07-24 18:08:25.816757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.696 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.816920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.816951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.817134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.817162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.817285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.817312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.817458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.817485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.817663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.817693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.817866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.817893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.818046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.818073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.818232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.818277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 
00:25:39.697 [2024-07-24 18:08:25.818443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.818470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.818596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.818639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.818782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.818816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.818980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.819007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.819134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.819179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.819356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.819389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.819539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.819567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.819744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.819771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.819889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.819915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.820043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.820069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 
00:25:39.697 [2024-07-24 18:08:25.820265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.820292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.820455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.820481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.820637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.820664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.697 qpair failed and we were unable to recover it. 00:25:39.697 [2024-07-24 18:08:25.820807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.697 [2024-07-24 18:08:25.820835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.820993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.821020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.821162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.821190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.821321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.821348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.821503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.821546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.821725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.821751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.821899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.821929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 
00:25:39.698 [2024-07-24 18:08:25.822088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.822126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.822279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.822306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.822459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.822504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.822656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.822683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.822841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.822868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.823068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.823098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.823277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.823304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.823458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.823484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.823646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.823675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.823816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.823846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 
00:25:39.698 [2024-07-24 18:08:25.823999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.824025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.824180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.824208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.824383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.824413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.824572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.824598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.824750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.824776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.824922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.824967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.698 qpair failed and we were unable to recover it. 00:25:39.698 [2024-07-24 18:08:25.825146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.698 [2024-07-24 18:08:25.825173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.825332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.825372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.825559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.825588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.825769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.825796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 
00:25:39.699 [2024-07-24 18:08:25.825926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.825953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.826113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.826139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.826286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.826318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.826470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.826496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.826639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.826666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.826878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.826904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.827069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.827098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.827253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.827282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.827469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.827496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.827666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.827695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 
00:25:39.699 [2024-07-24 18:08:25.827894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.827921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.828051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.828079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.828283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.828311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.828441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.828468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.828588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.828615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.828781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.828812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.828993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.829023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.829207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.829235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.829381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.829409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 00:25:39.699 [2024-07-24 18:08:25.829595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.699 [2024-07-24 18:08:25.829625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.699 qpair failed and we were unable to recover it. 
00:25:39.700 [... repeats continue; partway through, the failing qpair changes from tqpair=0x7f601c000b90 to tqpair=0x7f6014000b90, still with addr=10.0.0.2, port=4420 ...]
00:25:39.700 [2024-07-24 18:08:25.831806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.831851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.832003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.832031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.832164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.832193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.832364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.832392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.832591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.832636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.832834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.832880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.833028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.833056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.833208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.833235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.833406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.833453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 00:25:39.700 [2024-07-24 18:08:25.833589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.700 [2024-07-24 18:08:25.833637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:39.700 qpair failed and we were unable to recover it. 
00:25:39.700 [2024-07-24 18:08:25.833806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.833855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.834029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.834056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.834276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.834328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.834475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.834521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.834695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.834741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.834866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.834893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.835031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.835059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.835235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.835280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.835486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.835530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.835698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.835743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.700 qpair failed and we were unable to recover it.
00:25:39.700 [2024-07-24 18:08:25.835881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.700 [2024-07-24 18:08:25.835909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.836064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.836091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.836301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.836347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.836498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.836542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.836761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.836815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.836965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.836992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.837147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.837178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.837404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.837449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.837659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.837703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.837852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.837879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.838029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.838056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.838246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.838292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.838461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.838505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.838702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.838757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.838900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.838927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.839058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.839085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.839269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.839314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.839526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.839570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.839790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.839842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.840023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.840050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.840227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.840272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.840479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.840524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.840795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.840849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.841009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.841036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.841213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.841258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.841404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.841449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.841619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.841663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.841818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.841845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.842004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.701 [2024-07-24 18:08:25.842031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.701 qpair failed and we were unable to recover it.
00:25:39.701 [2024-07-24 18:08:25.842175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.842220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.842369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.842415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.842596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.842650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.842799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.842831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.842981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.843008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.843178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.843224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.843378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.843421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.843583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.843627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.843749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.843776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.843903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.843930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.844085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.844118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.844307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.844367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.844569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.844612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.844853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.844904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.845054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.845081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.845288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.845332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.845645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.845702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.845842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.845870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.846000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.846028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.846229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.846275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.846453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.846485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.846632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.846662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.846833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.846865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.847033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.847063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.847284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.847316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.847525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.847556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.847710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.847741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.847894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.847925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.848121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.848156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.848289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.702 [2024-07-24 18:08:25.848316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.702 qpair failed and we were unable to recover it.
00:25:39.702 [2024-07-24 18:08:25.848546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.848591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.848797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.848828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.849018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.849048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.849205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.849233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.849364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.849392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.849563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.849593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.849840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.849883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.850022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.850050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.850212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.850240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.850369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.850396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.850579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.850631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.850820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.850850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.851045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.851075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.851267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.851308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.851465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.851493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.851659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.851703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.852022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.852083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.852253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.852280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.852482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.852526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.852765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.852819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.852984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.853011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.853226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.853255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.853423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.853468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.853635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.853680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.853873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.853916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.854086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.854131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.854326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.854366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.854555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.854582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.854843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.854896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.703 qpair failed and we were unable to recover it.
00:25:39.703 [2024-07-24 18:08:25.855030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.703 [2024-07-24 18:08:25.855057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 [2024-07-24 18:08:25.855240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.855272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.855473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.855517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.855666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.855693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.855815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.855841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.856008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.856035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.856218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.856262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.856470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.856514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.856719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.856780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.856943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.856970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.857165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.857210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.857396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.857445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.857730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.857775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.857955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.857982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.858143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.858170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.858374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.858419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.858674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.858719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.858902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.704 [2024-07-24 18:08:25.858929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.704 qpair failed and we were unable to recover it.
00:25:39.704 [2024-07-24 18:08:25.859086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.859119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.859264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.859309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.859469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.859514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.859683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.859727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.859967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.860019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.860188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.860233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.860358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.860386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.860589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.860632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.860905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.860966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.861095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.861146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.861324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.861378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.861548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.861592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.861725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.861752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.861929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.861957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.862112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.862142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.862304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.862334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.862529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.862575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.862732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.862759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.862934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.862961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.863113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.863152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.863323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.863373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.863576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.863607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.863899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.863962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.864155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.864183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.864308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.864334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.864576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.864632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.864845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.864874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.865055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.865082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.705 qpair failed and we were unable to recover it.
00:25:39.705 [2024-07-24 18:08:25.865288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.705 [2024-07-24 18:08:25.865330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.865572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.865628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.865777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.865808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.866006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.866037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.866202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.866229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.866380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.866407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.866567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.866594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.866844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.866897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.867068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.867095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.867232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.867259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.867410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.867452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.867624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.867670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.867942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.867986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.868154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.868183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.868365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.868410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.868596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.868624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.868923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.868977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.869154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.869185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.869403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.869447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.869738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.869791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.869970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.869998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.870195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.870242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.870427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.870471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.870673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.870717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.870880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.870907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.871039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.871066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.871273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.871318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.871512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.871557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.706 qpair failed and we were unable to recover it.
00:25:39.706 [2024-07-24 18:08:25.871836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.706 [2024-07-24 18:08:25.871892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.872064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.872094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.872331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.872363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.872623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.872675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.872850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.872885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.873075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.873111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.873270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.873297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.873471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.873516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.873664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.873710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.873998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.874058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.874274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.874319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.874514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.874559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.874850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.874911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.875093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.875150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.875353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.875383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.875601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.875667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.875832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.875862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.876062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.876091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.876292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.876320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.876498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.876543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.876771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.876825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.876984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.877011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.877164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.877192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.877367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.877411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.877696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.877747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.877928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.877955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.878123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.878162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.878338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.878396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.707 qpair failed and we were unable to recover it.
00:25:39.707 [2024-07-24 18:08:25.878568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.707 [2024-07-24 18:08:25.878611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.878740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.878769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.878925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.878952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.879100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.879166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.879367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.879398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.879544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.879574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.879758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.879827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.880056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.880083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.880230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.880257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.880434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.880464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.880610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.880654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.880843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.880873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.881048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.881074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.881243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.881271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.881448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.881477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.881735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.881786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.881980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.882015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.882201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.882229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.882360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.882387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.882534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.882561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.882767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.882796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.882985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.883015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.883195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.883223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.883392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.883422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.883650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.883680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.883874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.883903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.884093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.884128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.884296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.884323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.708 qpair failed and we were unable to recover it.
00:25:39.708 [2024-07-24 18:08:25.884513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.708 [2024-07-24 18:08:25.884558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.884727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.884759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.885086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.885165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.885300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.885326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.885514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.885544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.885734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.885786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.885983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.886012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.886185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.886213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.886366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.886392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.886676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.886730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.886958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.886987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.887172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.887199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.887331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.887357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.887565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.887594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.887917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.887973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.888171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.888202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.888400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.888429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.888602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.888629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.888868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.888919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.889061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.889091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.889249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.889276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.889401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.889445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.889593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.889622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.889809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.889838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.890026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.890055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.890239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.890266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.890408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.890434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.890596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.890656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.890790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.890819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.891020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.891050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.891212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.891252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.891430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.709 [2024-07-24 18:08:25.891462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.709 qpair failed and we were unable to recover it.
00:25:39.709 [2024-07-24 18:08:25.891658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.891685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.891839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.891871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.892025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.892053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.892205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.892232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.892353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.892396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.892535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.892565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.892729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.892759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.892923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.892952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.893121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.893166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.893292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.893319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.893470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.893502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.893653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.893680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.893857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.893884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.894037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.894082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.894285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.894312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.894455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.894481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.894758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.894811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.710 qpair failed and we were unable to recover it.
00:25:39.710 [2024-07-24 18:08:25.895002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.710 [2024-07-24 18:08:25.895031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.895178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.895205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.895334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.895361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.895550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.895576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.895749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.895775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.895904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.895931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.896093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.896128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.896278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.896305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.896475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.896504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.896681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.896708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.896861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.896888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.897064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.897093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.897279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.897306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.897457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.897484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.897627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.897654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.897845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.897875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.898028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.898057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.898247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.898274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.898421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.898448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.898577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.898603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.898762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.898788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.898966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.898995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.899176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.899204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.899324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.899350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.899501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.899532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.899733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.899760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.899948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.899976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.900140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.900167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.900282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.711 [2024-07-24 18:08:25.900309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.711 qpair failed and we were unable to recover it.
00:25:39.711 [2024-07-24 18:08:25.900436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.900463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.900684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.900713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.900885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.900912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.901078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.901115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.901291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.901329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.901461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.901488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.901637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.901664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.901860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.901889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.902028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.902055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.902182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.902209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.902339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.902366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.902557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.902583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.902736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.902763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.902914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.902958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.903126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.903154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.903329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.903356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.903551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.903578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.903720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.903747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.903924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.903951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.904097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.904128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.904275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.904302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.904422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.904467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.904633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.904663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.904833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.904860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.905023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.905052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.905243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.905270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.905418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.905444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.905564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.905591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.905763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.905789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.905943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.712 [2024-07-24 18:08:25.905970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.712 qpair failed and we were unable to recover it.
00:25:39.712 [2024-07-24 18:08:25.906120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.713 [2024-07-24 18:08:25.906147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.713 qpair failed and we were unable to recover it.
00:25:39.713 [2024-07-24 18:08:25.906305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.713 [2024-07-24 18:08:25.906333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.713 qpair failed and we were unable to recover it.
00:25:39.713 [2024-07-24 18:08:25.906490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.713 [2024-07-24 18:08:25.906516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.713 qpair failed and we were unable to recover it.
00:25:39.713 [2024-07-24 18:08:25.906690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.713 [2024-07-24 18:08:25.906720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:39.713 qpair failed and we were unable to recover it.
00:25:39.713 [2024-07-24 18:08:25.906888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.906917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.907092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.907146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.907313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.907356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.907542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.907573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.907750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.907777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.907950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.907988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.908135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.908167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.908346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.908374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.908520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.908550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.908733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.908760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 
00:25:39.713 [2024-07-24 18:08:25.908939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.908966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.909147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.909188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.909410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.909439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.909579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.909606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.909752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.909781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.909906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.909951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.910161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.910189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.910385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.910415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.910585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.910614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.910786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.910823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 
00:25:39.713 [2024-07-24 18:08:25.911013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.911054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:39.713 [2024-07-24 18:08:25.911277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.713 [2024-07-24 18:08:25.911309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:39.713 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.911478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.911505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.911629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.911656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.911790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.911818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.911949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.911976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.912132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.912185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.912385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.912424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.912603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.912640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 00:25:40.004 [2024-07-24 18:08:25.912832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.004 [2024-07-24 18:08:25.912871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.004 qpair failed and we were unable to recover it. 
00:25:40.005 [2024-07-24 18:08:25.913059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.913097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.913324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.913360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.913562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.913604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.913764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.913805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.914008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.914044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.914231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.914271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.914437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.914475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.914686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.914726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.914872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.914907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.915042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.915078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 
00:25:40.005 [2024-07-24 18:08:25.915329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.915357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.915549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.915580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.915750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.915779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.915929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.915956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.916086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.916121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.916302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.916328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.916494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.916529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.916714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.916744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.916933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.916964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.917142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.917169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 
00:25:40.005 [2024-07-24 18:08:25.917287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.917314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.917503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.917533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.917671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.917698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.917826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.917853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.918029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.918058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.918262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.918289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.918434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.918477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.918643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.918672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.918850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.918876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.919040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.919069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 
00:25:40.005 [2024-07-24 18:08:25.919236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.919266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.919468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.919495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.919646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.919673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.919821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.919847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.920010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.920037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.920172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.920210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.920373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.005 [2024-07-24 18:08:25.920417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.005 qpair failed and we were unable to recover it. 00:25:40.005 [2024-07-24 18:08:25.920589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.920616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.920784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.920814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.920991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.921018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 
00:25:40.006 [2024-07-24 18:08:25.921169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.921197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.921348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.921393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.921558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.921587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.921779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.921806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.921975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.922004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.922166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.922196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.922343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.922370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.922518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.922549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.922746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.922775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.922971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.922998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 
00:25:40.006 [2024-07-24 18:08:25.923150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.923194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.923336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.923366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.923536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.923562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.923756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.923785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.923951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.923981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.924130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.924157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.924351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.924380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.924515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.924546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.924715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.924742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.924942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.924971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 
00:25:40.006 [2024-07-24 18:08:25.925132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.925163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.925363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.925390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.925545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.925571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.925762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.925791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.925932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.925958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.926124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.926151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.926323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.926353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.926546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.926573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.926742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.926772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.926947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.926974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 
00:25:40.006 [2024-07-24 18:08:25.927125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.927153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.927349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.927378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.927548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.927577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.006 [2024-07-24 18:08:25.927750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.006 [2024-07-24 18:08:25.927776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.006 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.927925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.927968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.928139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.928169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.928307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.928333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.928528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.928557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.928694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.928723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.928898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.928924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 
00:25:40.007 [2024-07-24 18:08:25.929126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.929156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.929332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.929359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.929479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.929505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.929679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.929706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.929877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.929907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.930052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.930079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.930241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.930269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.930415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.930449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.930626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.930652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.930803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.930830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 
00:25:40.007 [2024-07-24 18:08:25.930989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.931018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.931183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.931210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.931415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.931444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.931618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.931647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.931810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.931837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.931964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.932008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.932205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.932232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.932409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.932436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.932634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.932663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.932851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.932881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 
00:25:40.007 [2024-07-24 18:08:25.933074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.933107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.933310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.933339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.933531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.933561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.933702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.933728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.933877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.933920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.934062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.934093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.934275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.934302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.934495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.934524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.934719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.934749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.934913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.934940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 
00:25:40.007 [2024-07-24 18:08:25.935096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.007 [2024-07-24 18:08:25.935129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.007 qpair failed and we were unable to recover it. 00:25:40.007 [2024-07-24 18:08:25.935278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.935305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.935435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.935462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.935653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.935682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.935853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.935883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.936058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.936084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.936264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.936294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.936489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.936519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.936665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.936692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 00:25:40.008 [2024-07-24 18:08:25.936859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.008 [2024-07-24 18:08:25.936887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.008 qpair failed and we were unable to recover it. 
00:25:40.008 [2024-07-24 18:08:25.937053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.008 [2024-07-24 18:08:25.937083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.008 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for every retry between 18:08:25.937 and 18:08:25.977 ...]
00:25:40.014 [2024-07-24 18:08:25.977684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.014 [2024-07-24 18:08:25.977730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.015 qpair failed and we were unable to recover it.
00:25:40.015 [2024-07-24 18:08:25.977902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.977932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.978123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.978150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.978316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.978346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.978536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.978565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.978772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.978798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.978971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.979001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.979168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.979197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.979353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.979380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.979543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.979585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.979729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.979759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 
00:25:40.015 [2024-07-24 18:08:25.979936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.979963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.980120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.980166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.980335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.980364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.980510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.980537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.980717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.980761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.980897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.980926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.981174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.981201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.981351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.981393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.981537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.981567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.981762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.981789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 
00:25:40.015 [2024-07-24 18:08:25.981939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.981966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.982162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.982192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.982365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.982391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.982519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.982545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.982718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.982759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.982963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.982992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.983188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.983220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.983357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.983387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.983549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.983576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.983702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.983730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 
00:25:40.015 [2024-07-24 18:08:25.983856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.983884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.984046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.984084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.984314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.984355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.984515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.015 [2024-07-24 18:08:25.984543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.015 qpair failed and we were unable to recover it. 00:25:40.015 [2024-07-24 18:08:25.984729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.984756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.984883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.984926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.985093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.985132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.985273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.985300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.985426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.985452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.985622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.985651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 
00:25:40.016 [2024-07-24 18:08:25.985810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.985836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.985992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.986019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.986138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.986166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.986347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.986374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.986524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.986553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.986816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.986877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.987030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.987073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.987234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.987263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.987384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.987411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.987543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.987571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 
00:25:40.016 [2024-07-24 18:08:25.987735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.987764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.987964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.987993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.988188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.988215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.988367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.988410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.988591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.988618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.988749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.988776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.988905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.988932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.989077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.989109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.989299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.989325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.989516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.989545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 
00:25:40.016 [2024-07-24 18:08:25.989750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.989777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.989929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.989956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.990121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.990181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.990343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.990372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.990526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.990553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.990822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.990876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.991040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.991071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.991264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.991292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.991462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.991493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.991766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.991821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 
00:25:40.016 [2024-07-24 18:08:25.991990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.992019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.992175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.992204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.016 qpair failed and we were unable to recover it. 00:25:40.016 [2024-07-24 18:08:25.992347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.016 [2024-07-24 18:08:25.992390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.992539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.992566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.992742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.992786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.993053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.993113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.993266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.993293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.993469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.993496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.993677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.993707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.993883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.993909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 
00:25:40.017 [2024-07-24 18:08:25.994080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.994118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.994285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.994312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.994470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.994496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.994642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.994685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.994924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.994979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.995152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.995180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.995334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.995361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.995475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.995502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.995678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.995704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.995920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.995978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 
00:25:40.017 [2024-07-24 18:08:25.996194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.996221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.996354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.996380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.996556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.996583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.996749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.996779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.996950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.996976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.997128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.997155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.997308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.997335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.997479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.997505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.997632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.997659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.997805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.997832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 
00:25:40.017 [2024-07-24 18:08:25.998004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.998031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.998182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.998209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.998327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.998354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.998476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.998503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.998681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.998708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.998864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.998898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.999132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.999176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.999331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.999358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.999531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.999561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.017 [2024-07-24 18:08:25.999757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.999784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 
00:25:40.017 [2024-07-24 18:08:25.999944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.017 [2024-07-24 18:08:25.999974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.017 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.000139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.000182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.000331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.000358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.000528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.000557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.000743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.000773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.000951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.000978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.001124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.001167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.001344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.001373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.001527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.001554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.001708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.001751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 
00:25:40.018 [2024-07-24 18:08:26.001915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.001944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.002138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.002164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.002313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.002343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.002531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.002560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.002733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.002760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.002953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.002982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.003115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.003146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.003322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.003349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.003469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.003513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.003709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.003739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 
00:25:40.018 [2024-07-24 18:08:26.003929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.003956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.004091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.004126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.004315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.004344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.004523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.004550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.004703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.004730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.004853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.004880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.005034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.005061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.005214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.005241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.005394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.005438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 00:25:40.018 [2024-07-24 18:08:26.005610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.018 [2024-07-24 18:08:26.005637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.018 qpair failed and we were unable to recover it. 
00:25:40.018 [2024-07-24 18:08:26.005877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.018 [2024-07-24 18:08:26.005933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.018 qpair failed and we were unable to recover it.
[... the same three-line error repeats for tqpair=0xa6c250 through [2024-07-24 18:08:26.006901] ...]
00:25:40.018 [2024-07-24 18:08:26.007082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.018 [2024-07-24 18:08:26.007140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.018 qpair failed and we were unable to recover it.
[... the same three-line error repeats for tqpair=0x7f600c000b90 through [2024-07-24 18:08:26.008794], then again for tqpair=0xa6c250 from [2024-07-24 18:08:26.008919] through [2024-07-24 18:08:26.009646] ...]
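Every attempt follows the same three-step pattern: connect() is refused, nvme_tcp_qpair_connect_sock() reports the socket error, and the qpair is declared unrecoverable, after which another attempt starts a few hundred microseconds later (whether that retry loop lives in the test script or in the host driver is not visible from the log alone). A hedged sketch of that bounded retry-then-give-up shape; try_connect_qpair() is a hypothetical stand-in, not the SPDK API:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the transport connect step; in the log this is
 * nvme_tcp_qpair_connect_sock() failing with ECONNREFUSED every time. */
static bool try_connect_qpair(void)
{
    return false; /* always refused, as in the log above */
}

int main(void)
{
    const int max_attempts = 10; /* assumed bound, not taken from the log */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect_qpair()) {
            printf("qpair connected on attempt %d\n", attempt);
            return 0;
        }
        fprintf(stderr, "attempt %d: sock connection error\n", attempt);
    }
    /* Mirrors the log's terminal outcome for each qpair. */
    fprintf(stderr, "qpair failed and we were unable to recover it.\n");
    return 1;
}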
00:25:40.019 [2024-07-24 18:08:26.009850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.019 [2024-07-24 18:08:26.009876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.019 qpair failed and we were unable to recover it.
[... the same three-line error repeats for tqpair=0xa6c250, addr=10.0.0.2, port=4420; every attempt fails with errno = 111 and ends with the qpair unrecoverable ...]
00:25:40.024 [2024-07-24 18:08:26.044835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.024 [2024-07-24 18:08:26.044862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.024 qpair failed and we were unable to recover it.
00:25:40.024 [2024-07-24 18:08:26.045007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.045036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.045210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.045237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.045394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.045420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.045593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.045620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.045784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.045813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.045987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.046017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.046189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.046216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.046372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.046401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.046568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.046597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.046776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.046802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 
00:25:40.024 [2024-07-24 18:08:26.046966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.046995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.047169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.047196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.047345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.047372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.047549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.047579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.047714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.047743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.047917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.047944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.048088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.048123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.048296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.048323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.048448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.048474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.048628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.048655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 
00:25:40.024 [2024-07-24 18:08:26.048795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.048822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.048970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.048997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.049192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.049223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.049358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.049387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.049560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.049587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.024 [2024-07-24 18:08:26.049779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.024 [2024-07-24 18:08:26.049809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.024 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.049943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.049972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.050150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.050177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.050340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.050370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.050538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.050567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 
00:25:40.025 [2024-07-24 18:08:26.050745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.050772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.050916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.050943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.051118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.051163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.051338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.051364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.051504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.051533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.051690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.051718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.051902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.051929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.052105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.052140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.052308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.052339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.052486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.052513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 
00:25:40.025 [2024-07-24 18:08:26.052638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.052664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.052816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.052842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.053022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.053048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.053218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.053248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.053415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.053445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.053609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.053637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.053786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.053829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.054004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.054031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.054186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.054213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.054366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.054393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 
00:25:40.025 [2024-07-24 18:08:26.054547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.054590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.054761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.054787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.054952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.054982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.055174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.055204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.055346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.055373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.055501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.055528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.055726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.055756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.055942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.055969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.056132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.056162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.056327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.056356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 
00:25:40.025 [2024-07-24 18:08:26.056553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.056580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.056711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.056738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.056856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.025 [2024-07-24 18:08:26.056883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.025 qpair failed and we were unable to recover it. 00:25:40.025 [2024-07-24 18:08:26.057099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.057130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.057261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.057289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.057417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.057443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.057561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.057588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.057764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.057808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.057951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.057981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.058223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.058250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 
00:25:40.026 [2024-07-24 18:08:26.058431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.058458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.058638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.058667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.058835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.058861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.059019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.059048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.059212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.059239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.059389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.059416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.059580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.059610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.059795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.059824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.059997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.060029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.060185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.060212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 
00:25:40.026 [2024-07-24 18:08:26.060340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.060366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.060492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.060518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.060659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.060685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.060816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.060843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.061019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.061046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.061194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.061224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.061434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.061461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.061576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.061603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.061789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.061818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.062006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.062035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 
00:25:40.026 [2024-07-24 18:08:26.062212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.062239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.062388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.062415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.062601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.062628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.062802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.062829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.062991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.063021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.063179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.063209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.063374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.063401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.063555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.063581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.063704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.063747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.063914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.063943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 
00:25:40.026 [2024-07-24 18:08:26.064128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.064155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.026 qpair failed and we were unable to recover it. 00:25:40.026 [2024-07-24 18:08:26.064301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.026 [2024-07-24 18:08:26.064328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.064458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.064484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.064662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.064689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.064860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.064889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.065048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.065079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.065221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.065248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.065452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.065481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.065648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.065675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.065846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.065875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 
00:25:40.027 [2024-07-24 18:08:26.066054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.066084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.066285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.066312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.066467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.066510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.066698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.066728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.066932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.066958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.067084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.067117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.067262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.067289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.067419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.067446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.067592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.067619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.067775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.067804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 
00:25:40.027 [2024-07-24 18:08:26.067972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.067999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.068196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.068226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.068402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.068429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.068581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.068608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.068733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.068760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.068910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.068937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.069060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.069087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.069258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.069288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.069469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.069495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 00:25:40.027 [2024-07-24 18:08:26.069660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.027 [2024-07-24 18:08:26.069687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.027 qpair failed and we were unable to recover it. 
00:25:40.027 [2024-07-24 18:08:26.069832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.027 [2024-07-24 18:08:26.069859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.027 qpair failed and we were unable to recover it.
00:25:40.027 [... the same three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats from 18:08:26.069832 through 18:08:26.110867; duplicate records omitted ...]
00:25:40.034 [2024-07-24 18:08:26.110840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.034 [2024-07-24 18:08:26.110867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.034 qpair failed and we were unable to recover it.
00:25:40.034 [2024-07-24 18:08:26.111018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.111062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.111234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.111264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.111450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.111477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.111677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.111707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.111870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.111899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.112095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.112130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.112297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.112324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.112530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.112556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.112703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.112730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.112863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.112889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 
00:25:40.034 [2024-07-24 18:08:26.113063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.113096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.113323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.113350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.113544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.113574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.113771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.113798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.113948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.113975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.114110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.114136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.114287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.114314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.114441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.114468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.114618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.114659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.114791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.114820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 
00:25:40.034 [2024-07-24 18:08:26.114996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.115022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.115152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.115179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.115342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.115369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.115522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.115548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.115701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.115727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.034 qpair failed and we were unable to recover it. 00:25:40.034 [2024-07-24 18:08:26.115875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.034 [2024-07-24 18:08:26.115905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.116081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.116112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.116264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.116310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.116461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.116491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.116689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.116715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 
00:25:40.035 [2024-07-24 18:08:26.116876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.116905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.117063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.117092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.117278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.117305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.117429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.117455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.117606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.117633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.117786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.117813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.117978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.118007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.118209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.118236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.118393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.118420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.118567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.118593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 
00:25:40.035 [2024-07-24 18:08:26.118747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.118789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.118984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.119011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.119206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.119237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.119384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.119414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.119612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.119639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.119803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.119832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.119966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.119996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.120191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.120218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.120341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.120368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.120521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.120548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 
00:25:40.035 [2024-07-24 18:08:26.120759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.120786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.120958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.120991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.121126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.121156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.121355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.121381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.121555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.121586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.121758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.121784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.121933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.121960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.122152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.122182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.122374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.122404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.035 qpair failed and we were unable to recover it. 00:25:40.035 [2024-07-24 18:08:26.122602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.035 [2024-07-24 18:08:26.122628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 
00:25:40.036 [2024-07-24 18:08:26.122839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.122866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.122984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.123011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.123166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.123194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.123394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.123423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.123593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.123623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.123801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.123828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.124007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.124036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.124166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.124196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.124366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.124393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.124517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.124560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 
00:25:40.036 [2024-07-24 18:08:26.124808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.124838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.125041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.125067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.125259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.125289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.125482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.125511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.125661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.125687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.125884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.125913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.126047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.126076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.126227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.126255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.126386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.126435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.126591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.126621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 
00:25:40.036 [2024-07-24 18:08:26.126798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.126830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.126961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.127002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.127175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.127202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.127330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.127357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.127524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.127553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.127714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.127743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.127916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.127943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.128072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.128099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.128331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.128361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.128530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.128557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 
00:25:40.036 [2024-07-24 18:08:26.128737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.128782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.036 [2024-07-24 18:08:26.128952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.036 [2024-07-24 18:08:26.128978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.036 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.129160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.129187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.129348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.129377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.129564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.129593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.129782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.129809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.129982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.130011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.130173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.130203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.130378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.130405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.130534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.130561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 
00:25:40.037 [2024-07-24 18:08:26.130711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.130738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.130902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.130929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.131081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.131113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.131332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.131362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.131529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.131556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.131723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.131752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.131922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.131951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.132132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.132159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.132316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.132342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.132522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.132549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 
00:25:40.037 [2024-07-24 18:08:26.132699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.132725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.132855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.132882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.133060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.133116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.133273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.133300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.133431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.133458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.133583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.133609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.133733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.133760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.133881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.133923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.134099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.134131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.134309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.134340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 
00:25:40.037 [2024-07-24 18:08:26.134511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.134540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.134731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.134761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.134937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.134963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.135130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.135160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.135353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.135379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.135533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.135560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.135688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.135715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.135857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.135884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.037 qpair failed and we were unable to recover it. 00:25:40.037 [2024-07-24 18:08:26.136066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.037 [2024-07-24 18:08:26.136093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.038 qpair failed and we were unable to recover it. 00:25:40.038 [2024-07-24 18:08:26.136299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.038 [2024-07-24 18:08:26.136329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.038 qpair failed and we were unable to recover it. 
00:25:40.038 [2024-07-24 18:08:26.136482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.038 [2024-07-24 18:08:26.136511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.038 qpair failed and we were unable to recover it.
00:25:40.041 [2024-07-24 18:08:26.162147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.041 [2024-07-24 18:08:26.162188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.041 qpair failed and we were unable to recover it.
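Every failure above reports errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 (4420 is the standard NVMe/TCP port), so each qpair connect attempt is refused and the host abandons that qpair. A minimal sketch of the same failure mode, assuming a Linux host and a local port with no listener; this is illustrative C, not SPDK's posix_sock_create():

/*
 * Hedged repro sketch: connect() to a TCP port with no listener.
 * On Linux this fails with errno = 111 (ECONNREFUSED), matching the
 * posix_sock_create errors in the log. Address/port are placeholders.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed: no listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected output: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Note that the later log entries carry a different tqpair pointer (0x7f6014000b90 instead of 0xa6c250), which is consistent with a fresh qpair object being allocated for each reconnect attempt; the target address, port, and refusal are unchanged throughout.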
00:25:40.044 [2024-07-24 18:08:26.177418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.177463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.177618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.177662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.177814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.177841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.177991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.178018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.178190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.178235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.178445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.178488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.178675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.178720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.178878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.178905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.179060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.179088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.179264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.179309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 
00:25:40.044 [2024-07-24 18:08:26.179472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.179517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.179684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.179729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.179906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.179933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.180077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.180110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.180285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.180331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.180485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.180529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.180707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.180752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.180928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.180954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.181125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.181153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.181351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.181398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 
00:25:40.044 [2024-07-24 18:08:26.181578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.181623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.181750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.181778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.181940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.181974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.182167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.182197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.182368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.182412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.182566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.182610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.182743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.182770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.182930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.182958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.183160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.044 [2024-07-24 18:08:26.183205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.044 qpair failed and we were unable to recover it. 00:25:40.044 [2024-07-24 18:08:26.183413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.183457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 
00:25:40.045 [2024-07-24 18:08:26.183636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.183664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.183818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.183845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.184000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.184027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.184163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.184191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.184337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.184385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.184572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.184600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.184760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.184787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.184964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.184991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.185161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.185189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.185372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.185417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 
00:25:40.045 [2024-07-24 18:08:26.185622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.185667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.185840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.185867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.186019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.186047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.186217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.186263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.186419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.186464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.186639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.186687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.186867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.186894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.187027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.187054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.187234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.187277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.187458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.187504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 
00:25:40.045 [2024-07-24 18:08:26.187707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.187752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.187902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.187928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.188049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.188075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.188284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.188329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.188499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.188546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.188712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.188755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.188904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.188931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.189087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.189121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.189268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.189298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.189460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.189505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 
00:25:40.045 [2024-07-24 18:08:26.189701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.189745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.189919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.189947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.190091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.190129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.190283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.190329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.190488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.190532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.190708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.045 [2024-07-24 18:08:26.190756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.045 qpair failed and we were unable to recover it. 00:25:40.045 [2024-07-24 18:08:26.190891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.190919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.191098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.191132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.191332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.191378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.191572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.191602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 
00:25:40.046 [2024-07-24 18:08:26.191798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.191844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.191971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.191999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.192163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.192191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.192313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.192340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.192479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.192506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.192637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.192666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.192849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.192876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.192999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.193034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.194307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.194341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.194536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.194581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 
00:25:40.046 [2024-07-24 18:08:26.194714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.194741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.194866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.194894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.195055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.195082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.195275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.195324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.195505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.195551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.195718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.195765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.195914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.195942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.196118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.196147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.196333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.196362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.196515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.196546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 
00:25:40.046 [2024-07-24 18:08:26.196756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.196800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.196982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.197009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.197185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.197230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.197412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.197456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.197601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.197654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.197813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.197841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.197965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.197992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.198191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.198238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.198361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.198389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.198577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.198621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 
00:25:40.046 [2024-07-24 18:08:26.198750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.198778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.198954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.198981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.199135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.199185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.199377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.199407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.199620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.199665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.046 qpair failed and we were unable to recover it. 00:25:40.046 [2024-07-24 18:08:26.199800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.046 [2024-07-24 18:08:26.199828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.199983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.200011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.201036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.201067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.201268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.201315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.201505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.201533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 
00:25:40.047 [2024-07-24 18:08:26.201665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.201693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.201846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.201873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.202004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.202032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.202197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.202225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.202351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.202378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.202642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.202698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.202858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.202885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.203008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.203035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.203203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.203249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.203422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.203472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 
00:25:40.047 [2024-07-24 18:08:26.203676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.203721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.203901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.203928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.205049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.205081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.205285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.205330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.205503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.205547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.205733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.205789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.206952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.206984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.207216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.207263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.207408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.207440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.207757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.207813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 
00:25:40.047 [2024-07-24 18:08:26.207965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.207992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.209200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.209233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.209423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.209470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.209646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.209692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.209847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.209885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.210019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.210048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.210243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.210288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.210475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.210519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.210695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.210739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.210868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.210897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 
00:25:40.047 [2024-07-24 18:08:26.211022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.211050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.211224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.211270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.211436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.211495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.211698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.211741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.211917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.211944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.212077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.047 [2024-07-24 18:08:26.212113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.047 qpair failed and we were unable to recover it. 00:25:40.047 [2024-07-24 18:08:26.212290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.212337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.212554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.212581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.212815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.212859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.213036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.213063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 
00:25:40.048 [2024-07-24 18:08:26.213242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.213288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.213412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.213439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.213632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.213678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.213807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.213834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.213994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.214034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.214191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.214236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.214362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.214390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.214571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.214620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.214779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.214806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.214972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.214999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 
00:25:40.048 [2024-07-24 18:08:26.215158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.215204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.215376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.215419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.215575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.215609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.215764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.215795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.216002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.216029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.216160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.216189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.216368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.216405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.216537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.216565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.216698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.216727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.216911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.216943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 
00:25:40.048 [2024-07-24 18:08:26.217158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.217186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.217365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.217393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.217570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.217600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.217797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.217827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.218109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.218156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.218304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.218332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.218569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.218599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.218842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.218901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.219094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.219132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.219278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.219306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 
00:25:40.048 [2024-07-24 18:08:26.219481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.048 [2024-07-24 18:08:26.219514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.048 qpair failed and we were unable to recover it. 00:25:40.048 [2024-07-24 18:08:26.219670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.219701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.219872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.219906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.220114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.220142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.220264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.220292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.220444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.220475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.220628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.220671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.220823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.220854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.221005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.221033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.221186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.221214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 
00:25:40.049 [2024-07-24 18:08:26.221371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.221416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.221560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.221590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.221742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.221772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.221920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.221950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.222158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.222185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.222336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.222363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.222533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.222564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.222745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.222774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.222958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.222989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.223163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.223191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 
00:25:40.049 [2024-07-24 18:08:26.223318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.223346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.223554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.223586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.223778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.223808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.223983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.224017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.224201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.224228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.224363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.224401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.224596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.224638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.224792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.224835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.225013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.225044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.225227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.225270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 
00:25:40.049 [2024-07-24 18:08:26.225424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.225457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.225680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.225710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.225862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.225906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.226081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.226115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.226234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.226260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.226426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.226456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.226603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.226633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.226792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.226821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.227020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.227054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.227205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.227233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 
00:25:40.049 [2024-07-24 18:08:26.227354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.227397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.227541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.227570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.227739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.227768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.049 qpair failed and we were unable to recover it. 00:25:40.049 [2024-07-24 18:08:26.227947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.049 [2024-07-24 18:08:26.227976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.228194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.228221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.228373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.228428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.228594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.228623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.228788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.228817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.228947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.228976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.229160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.229188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 
00:25:40.050 [2024-07-24 18:08:26.229338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.229365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.230239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.230271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.230433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.230460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.230615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.230645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.230836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.230877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.231056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.231082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.231239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.231272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.231413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.231443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.231650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.231682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.231870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.231899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 
00:25:40.050 [2024-07-24 18:08:26.232113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.232157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.232292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.232318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.232510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.232539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.232831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.232884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.233026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.233055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.233259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.233286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.233416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.233441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.233619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.233668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.233888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.233942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.234152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.234178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 
00:25:40.050 [2024-07-24 18:08:26.234342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.234369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.234498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.234524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.234681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.234706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.234862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.234888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.235041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.235067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.235239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.235265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.235421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.235450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.235647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.235701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.235866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.235907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.236042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.236070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 
00:25:40.050 [2024-07-24 18:08:26.236275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.236301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.236428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.236454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.236628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.236664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.236861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.236893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.237094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.237126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.237246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.237273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.237415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.237444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.237631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.050 [2024-07-24 18:08:26.237660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.050 qpair failed and we were unable to recover it. 00:25:40.050 [2024-07-24 18:08:26.237844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.237895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.238069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.238113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 
00:25:40.051 [2024-07-24 18:08:26.238257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.238286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.238467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.238514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.238706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.238754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.238898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.238924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.239069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.239094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.239266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.239309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.239496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.239526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.239816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.239880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.240095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.240133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.240308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.240339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 
00:25:40.051 [2024-07-24 18:08:26.240505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.240532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.240693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.240720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.240874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.240902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.241044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.241072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.241269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.241299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.241525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.241576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.241744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.241789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.241969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.241996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.242175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.242206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.242374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.242413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 
00:25:40.051 [2024-07-24 18:08:26.242600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.242635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.242937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.243008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.243194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.243226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.243422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.243452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.243665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.243719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.243918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.243959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.244142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.244172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.244307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.244335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.244481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.244508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.244660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.244689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 
00:25:40.051 [2024-07-24 18:08:26.244858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.244887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.245054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.245082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.245263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.245292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.245478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.245507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.245654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.245683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.245894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.245923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.246082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.246116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.246294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.246320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.246517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.246561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.246820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.246849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 
00:25:40.051 [2024-07-24 18:08:26.247009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.247051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.051 qpair failed and we were unable to recover it. 00:25:40.051 [2024-07-24 18:08:26.247218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.051 [2024-07-24 18:08:26.247245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.052 qpair failed and we were unable to recover it. 00:25:40.052 [2024-07-24 18:08:26.247393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.052 [2024-07-24 18:08:26.247422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.052 qpair failed and we were unable to recover it. 00:25:40.052 [2024-07-24 18:08:26.247658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.052 [2024-07-24 18:08:26.247707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.052 qpair failed and we were unable to recover it. 00:25:40.332 [2024-07-24 18:08:26.247836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.332 [2024-07-24 18:08:26.247866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.332 qpair failed and we were unable to recover it. 00:25:40.332 [2024-07-24 18:08:26.248028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.332 [2024-07-24 18:08:26.248057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.332 qpair failed and we were unable to recover it. 00:25:40.332 [2024-07-24 18:08:26.248232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.332 [2024-07-24 18:08:26.248260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.332 qpair failed and we were unable to recover it. 00:25:40.332 [2024-07-24 18:08:26.248382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.332 [2024-07-24 18:08:26.248411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.332 qpair failed and we were unable to recover it. 00:25:40.332 [2024-07-24 18:08:26.248622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.332 [2024-07-24 18:08:26.248651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.332 qpair failed and we were unable to recover it. 00:25:40.332 [2024-07-24 18:08:26.248823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.332 [2024-07-24 18:08:26.248856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.332 qpair failed and we were unable to recover it. 
00:25:40.332 [2024-07-24 18:08:26.249641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.249674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.249883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.249913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.250059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.250089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.250258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.250284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.251626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.251661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.251858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.251890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.252057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.252084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.252250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.252276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.252417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.252444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.253210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.253241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.253382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.253410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.254305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.254336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.254581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.254633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.254805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.254835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.255033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.255059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.332 [2024-07-24 18:08:26.255226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.332 [2024-07-24 18:08:26.255253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.332 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.255407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.255454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.255643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.255688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.255821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.255847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.256000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.256027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.256189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.256216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.256362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.256392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.256575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.256604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.256825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.256854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.257031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.257058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.257254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.257284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.257453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.257483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.257645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.257675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.257871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.257897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.258029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.258055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.258199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.258226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.258425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.258458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.258670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.258717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.258913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.258942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.259163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.259192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.259354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.259383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.259598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.259628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.259788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.259817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.259969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.260000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.260143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.260170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.260298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.260324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.260446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.260472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.260603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.260629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.260780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.260807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.260959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.260985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.261117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.261144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.261298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.261324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.261508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.261535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.261710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.261738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.261878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.261904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.262065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.262092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.262249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.262278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.333 qpair failed and we were unable to recover it.
00:25:40.333 [2024-07-24 18:08:26.262461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.333 [2024-07-24 18:08:26.262490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.262714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.262761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.262956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.262991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.263111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.263137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.263306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.263335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.263548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.263593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.263819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.263875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.264025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.264051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.264216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.264247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.264420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.264449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.264669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.264695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.264917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.264944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.265098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.265148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.265341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.265375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.265578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.265633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.265806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.265832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.265979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.266005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.266152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.266182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.266345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.266375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.266614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.266673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.266839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.266867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.267032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.267058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.267214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.267242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.267374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.267401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.267550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.267576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.267792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.267842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.268010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.268039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.268221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.268265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.268501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.268568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.268761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.268813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.268975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.269002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.269130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.269159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.269294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.269322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.269468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.269495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.269801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.269855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.269988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.270015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.270188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.334 [2024-07-24 18:08:26.270216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.334 qpair failed and we were unable to recover it.
00:25:40.334 [2024-07-24 18:08:26.270370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.270414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.270585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.270632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.270792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.270818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.270971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.271006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.271193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.271240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.271388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.271438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.271623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.271650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.271777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.271804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.271937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.271964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.272135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.272162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.272312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.272357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.272523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.272567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.272695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.272722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.272881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.272908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.273035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.273062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.273242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.273287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.273455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.273497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.273696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.273723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.273878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.273915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.274093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.274150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.274293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.274338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.274512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.274558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.274814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.274866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.275022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.275049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.275215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.275261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.275393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.275431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.275625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.275678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.275821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.275857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.276006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.276035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.276204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.276234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.276388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.276418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.276593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.276622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.276787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.276818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.277010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.277040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.277221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.277249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.277373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.277428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.335 qpair failed and we were unable to recover it.
00:25:40.335 [2024-07-24 18:08:26.277642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.335 [2024-07-24 18:08:26.277672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.277842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.277872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.278085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.278125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.278276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.278303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.278486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.278530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.278747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.278777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.278980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.279009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.279190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.279224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.279359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.279385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.279562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.279592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.279861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.279916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.280115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.280160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.280308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.280338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.280536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.280565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.280882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.280947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.281129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.281156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.281313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.281342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.281558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.281594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.281845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.281900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.282112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.282139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.282282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.282310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.282489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.282536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.282778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.282828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.283004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.283029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.283213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.283243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.283426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.283470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.283734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.283780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.283914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.283940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.284099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.284143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.284276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.284303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.284451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.284480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.284679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.284734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.284896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.284925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.285066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.285092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.285245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.285277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.285428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.285458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.285692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.285737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.285901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.336 [2024-07-24 18:08:26.285932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.336 qpair failed and we were unable to recover it.
00:25:40.336 [2024-07-24 18:08:26.286108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.337 [2024-07-24 18:08:26.286134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.337 qpair failed and we were unable to recover it.
00:25:40.337 [2024-07-24 18:08:26.286313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.337 [2024-07-24 18:08:26.286342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.337 qpair failed and we were unable to recover it.
00:25:40.337 [2024-07-24 18:08:26.286539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.337 [2024-07-24 18:08:26.286569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.337 qpair failed and we were unable to recover it.
00:25:40.337 [2024-07-24 18:08:26.286717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.337 [2024-07-24 18:08:26.286743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.337 qpair failed and we were unable to recover it.
00:25:40.337 [2024-07-24 18:08:26.286900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.337 [2024-07-24 18:08:26.286926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.337 qpair failed and we were unable to recover it.
00:25:40.337 [2024-07-24 18:08:26.287076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.337 [2024-07-24 18:08:26.287112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.337 qpair failed and we were unable to recover it.
00:25:40.337 [2024-07-24 18:08:26.287267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.337 [2024-07-24 18:08:26.287310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.337 qpair failed and we were unable to recover it.
00:25:40.337 [2024-07-24 18:08:26.287543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.287590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.287785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.287834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.288004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.288033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.288261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.288316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.288530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.288580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.288741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.288789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.288943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.288969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.289097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.289141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.289308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.289352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.289518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.289565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 
00:25:40.337 [2024-07-24 18:08:26.289731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.289783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.289941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.289969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.290125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.290152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.290304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.290348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.290523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.290570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.290783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.290829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.290961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.290993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.291171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.291216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.291372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.291427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.291591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.291636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 
00:25:40.337 [2024-07-24 18:08:26.291791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.291817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.291998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.292024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.337 [2024-07-24 18:08:26.292207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.337 [2024-07-24 18:08:26.292248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.337 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.292410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.292438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.292592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.292618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.292821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.292870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.293020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.293071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.293226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.293256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.293415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.293444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.293713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.293766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 
00:25:40.338 [2024-07-24 18:08:26.293954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.293980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.294123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.294167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.294286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.294329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.294475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.294504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.294729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.294785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.294982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.295008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.295163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.295190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.295413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.295461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.295684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.295730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.295913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.295941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 
00:25:40.338 [2024-07-24 18:08:26.296089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.296122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.296252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.296278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.296449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.296474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.296817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.296871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.297066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.297109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.297257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.297283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.297488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.297547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.297830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.297877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.298056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.298081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.298252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.298292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 
00:25:40.338 [2024-07-24 18:08:26.298462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.298490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.298652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.298698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.298887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.298932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.299115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.299142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.299279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.299305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.299454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.299497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.299724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.299771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.299931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.299958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.300145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.300191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 00:25:40.338 [2024-07-24 18:08:26.300356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.338 [2024-07-24 18:08:26.300403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.338 qpair failed and we were unable to recover it. 
00:25:40.338 [2024-07-24 18:08:26.300603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.300651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.300836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.300885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.301046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.301073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.301228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.301273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.301454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.301499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.301714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.301774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.301899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.301926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.302082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.302114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.302270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.302315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.302530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.302573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 
00:25:40.339 [2024-07-24 18:08:26.302853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.302899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.303028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.303054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.303204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.303248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.303409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.303436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.303609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.303653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.303844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.303871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.304021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.304047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.304199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.304244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.304415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.304458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.304609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.304652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 
00:25:40.339 [2024-07-24 18:08:26.304829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.304863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.305011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.305036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.305235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.305279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.305453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.305502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.305723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.305768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.305894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.305921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.306052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.306077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.306241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.306286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.306456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.306499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.306708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.306735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 
00:25:40.339 [2024-07-24 18:08:26.306897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.306923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.307074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.307109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.307263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.307289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.307418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.307444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.307603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.307629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.307819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.307869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.339 [2024-07-24 18:08:26.307995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.339 [2024-07-24 18:08:26.308022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.339 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.308202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.308248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.308417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.308480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.308677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.308720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 
00:25:40.340 [2024-07-24 18:08:26.308877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.308903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.309115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.309171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.309326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.309358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.309532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.309562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.309765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.309816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.310026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.310055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.310225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.310251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.310405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.310433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.310620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.310669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.310871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.310926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 
00:25:40.340 [2024-07-24 18:08:26.311148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.311176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.311311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.311339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.311500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.311526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.311680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.311706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.311832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.311859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.312015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.312041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.312210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.312241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.312389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.312419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.312589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.312620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.312777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.312827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 
00:25:40.340 [2024-07-24 18:08:26.313006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.313035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.313221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.313247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.313396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.313426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.313604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.313638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.313789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.313832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.314023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.314049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.314216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.314243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.314412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.314441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.314577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.314606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.314774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.314804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 
00:25:40.340 [2024-07-24 18:08:26.315000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.315028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.315200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.315226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.315355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.340 [2024-07-24 18:08:26.315397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.340 qpair failed and we were unable to recover it. 00:25:40.340 [2024-07-24 18:08:26.315596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.315624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.315784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.315813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.315970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.316000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.316165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.316192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.316354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.316380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.316524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.316553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.316719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.316748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 
00:25:40.341 [2024-07-24 18:08:26.316914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.316944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.317158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.317185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.317341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.317367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.317556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.317582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.317760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.317789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.317950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.317979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.318131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.318157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.318290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.318316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.318500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.318526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.318755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.318783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 
00:25:40.341 [2024-07-24 18:08:26.318968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.318995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.319156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.319183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.319333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.319358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.319502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.319532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.319696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.319725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.319926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.319955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.320162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.320189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.320336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.320362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.320491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.320516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.320698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.320727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 
00:25:40.341 [2024-07-24 18:08:26.320890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.320919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.321124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.321151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.321278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.321304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.321487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.321516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.341 qpair failed and we were unable to recover it. 00:25:40.341 [2024-07-24 18:08:26.321710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.341 [2024-07-24 18:08:26.321739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.321911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.321941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.322120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.322174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.322355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.322394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.322581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.322626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.322805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.322850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 
00:25:40.342 [2024-07-24 18:08:26.323007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.323034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.323219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.323247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.323406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.323433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.326251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.326292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.326493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.326522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.326702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.326747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.326938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.326982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.327129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.327158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.327337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.327381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.327562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.327609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 
00:25:40.342 [2024-07-24 18:08:26.327787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.327831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.328006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.328033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.328218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.328263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.328439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.328483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.328653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.328682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.328847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.328874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.329004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.329030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.329172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.329216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.329374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.329401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.329576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.329602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 
00:25:40.342 [2024-07-24 18:08:26.329751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.329781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.329927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.329953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.330112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.330139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.330304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.330331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.330503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.330546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.330676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.330703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.330856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.330882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.331054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.331080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.331243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.331288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.331459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.331502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 
00:25:40.342 [2024-07-24 18:08:26.331701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.331745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.342 [2024-07-24 18:08:26.331877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.342 [2024-07-24 18:08:26.331904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.342 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.332023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.332049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.332176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.332202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.332390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.332416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.332565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.332592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.332746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.332772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.332894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.332920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.333097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.333130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.333279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.333322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 
00:25:40.343 [2024-07-24 18:08:26.333537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.333581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.333745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.333788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.333941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.333967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.334098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.334133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.334708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.334738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.334898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.334925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.335113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.335141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.335298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.335342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.335523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.335567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.335773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.335818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 
00:25:40.343 [2024-07-24 18:08:26.335941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.335968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.336147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.336189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.336384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.336429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.336630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.336674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.336825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.336851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.336985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.337011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.337199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.337227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.337369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.337413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.337591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.337634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.337792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.337818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 
00:25:40.343 [2024-07-24 18:08:26.337992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.338023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.338199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.338245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.338396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.338440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.338583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.338627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.338805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.338832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.338956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.338982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.339153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.339182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.339349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.339379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.339577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.343 [2024-07-24 18:08:26.339604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.343 qpair failed and we were unable to recover it. 00:25:40.343 [2024-07-24 18:08:26.339756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.339783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 
00:25:40.344 [2024-07-24 18:08:26.339906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.339933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.340086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.340123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.340295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.340340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.340516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.340564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.340749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.340776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.340897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.340924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.341114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.341141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.341296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.341341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.341519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.341547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.341700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.341743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 
00:25:40.344 [2024-07-24 18:08:26.341897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.341924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.342115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.342142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.342293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.342338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.342505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.342550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.342747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.342792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.342939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.342967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.343122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.343149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.343308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.343351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.343518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.343562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.343761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.343806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 
00:25:40.344 [2024-07-24 18:08:26.343937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.343963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.344117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.344144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.344313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.344357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.344539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.344583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.344759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.344786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.344941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.344968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.345166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.345210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.345364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.345408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.345587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.345635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.345781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.345807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 
00:25:40.344 [2024-07-24 18:08:26.345963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.345993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.346190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.346234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.346407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.346450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.346627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.346669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.346849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.346875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.347053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.344 [2024-07-24 18:08:26.347079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.344 qpair failed and we were unable to recover it. 00:25:40.344 [2024-07-24 18:08:26.347264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.347308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.347450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.347495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.347672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.347716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.347870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.347897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 
00:25:40.345 [2024-07-24 18:08:26.348045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.348070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.348227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.348272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.348450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.348497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.348697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.348741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.348895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.348920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.349073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.349098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.349271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.349315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.349528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.349571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.349767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.349797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.349968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.349994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 
00:25:40.345 [2024-07-24 18:08:26.350140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.350166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.350323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.350367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.350580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.350623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.350822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.350851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.351043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.351070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.351230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.351260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.351444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.351487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.351665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.351708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.351880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.351905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.352031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.352057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 
00:25:40.345 [2024-07-24 18:08:26.352245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.352288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.352455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.352497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.352655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.352698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.352854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.352881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.353005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.353030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.353183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.353227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.353376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.353406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.353609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.353636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.353809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.353835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.353988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.354014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 
00:25:40.345 [2024-07-24 18:08:26.354157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.354187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.354337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.354363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.345 qpair failed and we were unable to recover it. 00:25:40.345 [2024-07-24 18:08:26.354506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.345 [2024-07-24 18:08:26.354533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.354714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.354740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.354881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.354922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.355070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.355098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.355289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.355318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.355457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.355486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.355628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.355657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.355821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.355850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 
00:25:40.346 [2024-07-24 18:08:26.356020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.356047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.356232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.356262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.356479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.356522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.356672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.356715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.356921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.356965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.357118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.357145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.357300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.357348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.357538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.357582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.357753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.357798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 00:25:40.346 [2024-07-24 18:08:26.357944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.346 [2024-07-24 18:08:26.357969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.346 qpair failed and we were unable to recover it. 
00:25:40.352 [2024-07-24 18:08:26.396249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.396294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.396428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.396456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.396660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.396704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.396852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.396878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.397030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.397057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.397242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.397286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.397460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.397505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.397682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.397725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.397870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.397896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.398048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.398074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 
00:25:40.352 [2024-07-24 18:08:26.398254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.398298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.398477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.398526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.398702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.398745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.398921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.398946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.399119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.399162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.399307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.399349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.399526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.399574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.399747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.399790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.399941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.399967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.400134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.400160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 
00:25:40.352 [2024-07-24 18:08:26.400350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.400393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.400544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.400573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.400750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.400776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.400910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.400937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.401081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.401112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.401279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.401323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.401525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.352 [2024-07-24 18:08:26.401554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.352 qpair failed and we were unable to recover it. 00:25:40.352 [2024-07-24 18:08:26.401741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.401783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.401955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.401980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.402136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.402163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 
00:25:40.353 [2024-07-24 18:08:26.402361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.402409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.402583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.402627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.402777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.402820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.402950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.402975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.403114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.403142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.403329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.403356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.403525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.403569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.403779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.403822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.403978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.404004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.404135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.404162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 
00:25:40.353 [2024-07-24 18:08:26.404323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.404366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.404555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.404596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.404769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.404794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.404944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.404970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.405151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.405182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.405395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.405439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.405612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.405655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.405835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.405861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.406010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.406037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.406188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.406231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 
00:25:40.353 [2024-07-24 18:08:26.406430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.406474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.406649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.406693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.406822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.406848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.407030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.407057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.407235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.407279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.407421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.407465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.407620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.407663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.407796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.407823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.407972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.407999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.408172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.408216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 
00:25:40.353 [2024-07-24 18:08:26.408416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.408458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.408609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.408651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.408804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.408830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.408946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.408972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.353 qpair failed and we were unable to recover it. 00:25:40.353 [2024-07-24 18:08:26.409095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.353 [2024-07-24 18:08:26.409126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.409304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.409347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.409493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.409539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.409717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.409742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.409896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.409921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.410078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.410119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 
00:25:40.354 [2024-07-24 18:08:26.410297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.410346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.410501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.410545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.410723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.410765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.410882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.410908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.411065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.411093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.411266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.411310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.411474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.411518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.411699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.411743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.411864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.411891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.412039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.412065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 
00:25:40.354 [2024-07-24 18:08:26.412246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.412292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.412443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.412486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.412663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.412711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.412867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.412894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.413045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.413070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.413244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.413288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.413496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.413539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.413717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.413764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.413913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.413940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.414066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.414092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 
00:25:40.354 [2024-07-24 18:08:26.414301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.414344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.414503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.414548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.414701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.414728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.414903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.414929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.415084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.415116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.415288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.415331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.415539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.415583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.354 [2024-07-24 18:08:26.415790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.354 [2024-07-24 18:08:26.415834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.354 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.415985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.416012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.416197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.416241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 
00:25:40.355 [2024-07-24 18:08:26.416387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.416430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.416604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.416648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.416823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.416848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.416997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.417022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.417186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.417231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.417409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.417453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.417654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.417698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.417826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.417852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.418034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.418059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.418239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.418285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 
00:25:40.355 [2024-07-24 18:08:26.418460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.418508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.418690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.418734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.418905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.418931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.419062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.419089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.419272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.419314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.419511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.419540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.419730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.419773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.419902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.419928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.420097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.420128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.420294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.420337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 
00:25:40.355 [2024-07-24 18:08:26.420515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.420559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.420709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.420756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.420910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.420935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.421083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.421113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.421301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.421328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.421523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.421566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.421710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.421754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.421901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.421927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.422113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.422140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.422288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.422332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 
00:25:40.355 [2024-07-24 18:08:26.422471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.422514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.422652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.422696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.422878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.422922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.423077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.423110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.423271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.355 [2024-07-24 18:08:26.423314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.355 qpair failed and we were unable to recover it. 00:25:40.355 [2024-07-24 18:08:26.423484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.356 [2024-07-24 18:08:26.423527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.356 qpair failed and we were unable to recover it. 00:25:40.356 [2024-07-24 18:08:26.423724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.356 [2024-07-24 18:08:26.423768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.356 qpair failed and we were unable to recover it. 00:25:40.356 [2024-07-24 18:08:26.423926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.356 [2024-07-24 18:08:26.423952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.356 qpair failed and we were unable to recover it. 00:25:40.356 [2024-07-24 18:08:26.424167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.356 [2024-07-24 18:08:26.424193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.356 qpair failed and we were unable to recover it. 00:25:40.356 [2024-07-24 18:08:26.424391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.356 [2024-07-24 18:08:26.424434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.356 qpair failed and we were unable to recover it. 
00:25:40.356 [2024-07-24 18:08:26.424613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.356 [2024-07-24 18:08:26.424663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.356 qpair failed and we were unable to recover it.
00:25:40.362 [... the same three-line failure sequence repeats continuously, timestamps 2024-07-24 18:08:26.424783 through 18:08:26.466696, always connect() failed with errno = 111 and sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:25:40.362 [2024-07-24 18:08:26.466843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.466870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.467027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.467052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.467236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.467265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.467430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.467474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.467652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.467698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.467825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.467852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.467987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.468013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.468141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.468168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.468339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.468380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.468561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.468603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 
00:25:40.362 [2024-07-24 18:08:26.468787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.468813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.468964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.468991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.469163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.469193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.469379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.469407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.469576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.469604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.469767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.469792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.469941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.469967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.470118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.470144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.470295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.470324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.470599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.470644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 
00:25:40.362 [2024-07-24 18:08:26.470765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.470791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.470949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.470974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.471126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.471152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.471321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.471350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.471523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.471567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.471710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.471735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.471887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.471913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.472072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.472108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.472288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.472316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.472499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.472542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 
00:25:40.362 [2024-07-24 18:08:26.472698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.472725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.472855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.362 [2024-07-24 18:08:26.472882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.362 qpair failed and we were unable to recover it. 00:25:40.362 [2024-07-24 18:08:26.473057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.473083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.473238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.473282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.473448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.473493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.473662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.473706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.473859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.473885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.474071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.474097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.474315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.474360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.474535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.474564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 
00:25:40.363 [2024-07-24 18:08:26.474707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.474733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.474885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.474911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.475063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.475089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.475263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.475307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.475479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.475522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.475670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.475723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.475850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.475876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.476035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.476061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.476246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.476291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.476467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.476511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 
00:25:40.363 [2024-07-24 18:08:26.476688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.476731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.476910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.476937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.477098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.477131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.477288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.477332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.477508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.477551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.477727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.477771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.477903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.477930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.478067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.478092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.478283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.478326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.478496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.478540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 
00:25:40.363 [2024-07-24 18:08:26.478719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.478763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.478912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.478939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.479119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.479147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.479326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.479368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.479550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.479592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.479788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.479832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.479967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.479995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.480182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.480231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.480442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 00:25:40.363 [2024-07-24 18:08:26.480661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.363 [2024-07-24 18:08:26.480706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.363 qpair failed and we were unable to recover it. 
00:25:40.363 [2024-07-24 18:08:26.480865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.480890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.481067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.481093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.481245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.481287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.481501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.481544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.481694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.481737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.481921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.481948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.482073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.482098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.482278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.482321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.482530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.482574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.482753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.482796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 
00:25:40.364 [2024-07-24 18:08:26.482944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.482969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.483113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.483140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.483345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.483387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.483578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.483622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.483772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.483815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.483993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.484018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.484219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.484265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.484469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.484512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.484682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.484727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.484886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.484911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 
00:25:40.364 [2024-07-24 18:08:26.485069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.485095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.485294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.485337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.485483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.485526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.485708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.485738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.485935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.485961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.486082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.486113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.486270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.486314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.486518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.486561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.486719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.486745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.486935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.486961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 
00:25:40.364 [2024-07-24 18:08:26.487092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.487126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.487298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.487341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.487508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.487551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.487723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.487767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.487896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.487922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.488071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.488096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.364 [2024-07-24 18:08:26.488244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.364 [2024-07-24 18:08:26.488288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.364 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.488463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.488514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.488720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.488749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.488916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.488942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 
00:25:40.365 [2024-07-24 18:08:26.489070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.489096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.489272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.489316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.489485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.489529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.489687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.489713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.489862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.489888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.490069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.490095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.490266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.490294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.490466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.490510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.490710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.490753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.490910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.490935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 
00:25:40.365 [2024-07-24 18:08:26.491098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.491130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.491335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.491380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.491546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.491590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.491758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.491801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.491955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.491981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.492148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.492178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.492373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.492420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.492572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.492616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.492742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.492769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 00:25:40.365 [2024-07-24 18:08:26.492902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.365 [2024-07-24 18:08:26.492929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:40.365 qpair failed and we were unable to recover it. 
00:25:40.365 [2024-07-24 18:08:26.493076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.365 [2024-07-24 18:08:26.493106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:40.365 qpair failed and we were unable to recover it.
00:25:40.367 (the connect()/qpair error pair above repeats continuously for tqpair=0x7f6014000b90 from 18:08:26.493076 through 18:08:26.509312)
00:25:40.368 [2024-07-24 18:08:26.509549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.368 [2024-07-24 18:08:26.509596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.368 qpair failed and we were unable to recover it.
00:25:40.368 [2024-07-24 18:08:26.512269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.368 [2024-07-24 18:08:26.512313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.368 qpair failed and we were unable to recover it.
00:25:40.371 (the same error pair keeps repeating, alternating among tqpair=0x7f6014000b90, 0xa6c250, and 0x7f600c000b90, through 18:08:26.533919; every attempt ends with "qpair failed and we were unable to recover it.")
00:25:40.371 [2024-07-24 18:08:26.534094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.534126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.534275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.534304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.534475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.534501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.534690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.534742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.534913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.534942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.535110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.535154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.535276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.535302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.535448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.535478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.535672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.535698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.535830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.535855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 
00:25:40.371 [2024-07-24 18:08:26.535981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.536007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.536140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.536167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.536364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.536392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.536581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.536610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.536754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.536780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.371 qpair failed and we were unable to recover it. 00:25:40.371 [2024-07-24 18:08:26.536909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-07-24 18:08:26.536935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.537089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.537132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.537297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.537323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.537451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.537477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.537624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.537649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 
00:25:40.372 [2024-07-24 18:08:26.537837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.537863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.538025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.538054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.538217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.538244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.538400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.538430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.538602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.538631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.538796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.538824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.538992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.539018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.539166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.539193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.539385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.539410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.539560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.539585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 
00:25:40.372 [2024-07-24 18:08:26.539786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.539814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.540011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.540040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.540192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.540219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.540396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.540437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.540639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.540668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.540833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.540859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.541008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.541033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.541226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.541252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.541375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.541400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.541554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.541579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 
00:25:40.372 [2024-07-24 18:08:26.541735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.541760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.541907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.541933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.542077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.542117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.542269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.542295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.542470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.542496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.542667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.542696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.542857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.542885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.543053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.543079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.543212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.543255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.543424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.543452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 
00:25:40.372 [2024-07-24 18:08:26.543592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.543618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.543753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.543797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.372 qpair failed and we were unable to recover it. 00:25:40.372 [2024-07-24 18:08:26.543971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.372 [2024-07-24 18:08:26.543996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.544120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.544147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.544293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.544335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.544480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.544509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.544701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.544727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.544924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.544953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.545150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.545179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.545378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.545403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 
00:25:40.373 [2024-07-24 18:08:26.545572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.545601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.545764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.545793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.545956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.545981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.546147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.546176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.546348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.546377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.546523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.546549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.546680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.546721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.546910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.546939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.547115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.547142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.547333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.547362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 
00:25:40.373 [2024-07-24 18:08:26.547518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.547543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.547687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.547712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.547853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.547878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.548045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.548074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.548240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.548267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.548465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.548494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.548624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.548653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.548849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.548875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.549050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.549079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.549255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.549281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 
00:25:40.373 [2024-07-24 18:08:26.549434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.549459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.549578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.549622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.549811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.549837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.550011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.550037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.550212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.550239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.550391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.550434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.550625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.550651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.550842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.550871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.551069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.551094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 00:25:40.373 [2024-07-24 18:08:26.551253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.373 [2024-07-24 18:08:26.551279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.373 qpair failed and we were unable to recover it. 
00:25:40.374 [2024-07-24 18:08:26.551424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.551465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.551594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.551628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.551795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.551821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.551965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.551990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.552141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.552184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.552381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.552406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.552537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.552566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.552706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.552734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.552875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.552900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.553048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.553090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 
00:25:40.374 [2024-07-24 18:08:26.553238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.553267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.553411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.553437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.553597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.553622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.553766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.553795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.553977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.554005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.554183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.554210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.554336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.554361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.554529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.554554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.554708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.554734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.554905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.554933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 
00:25:40.374 [2024-07-24 18:08:26.555110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.555137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.555285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.555314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.555509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.555538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.555716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.555742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.555891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.555917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.556090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.556124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.556295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.556321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.556448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.556474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.556596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.556622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.556776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.556802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 
00:25:40.374 [2024-07-24 18:08:26.556970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.556998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.557164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.557193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.557367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.557394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.557520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.557545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.557664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.557691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.557866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.557892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.558018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.558043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.558219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.558245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.374 [2024-07-24 18:08:26.558374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.374 [2024-07-24 18:08:26.558399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.374 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.558524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.558549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 
00:25:40.375 [2024-07-24 18:08:26.558700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.558726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.558869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.558895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.559070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.559099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.559241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.559266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.559381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.559407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.559584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.559626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.559819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.559848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.559992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.560018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.560162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.560189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.560388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.560417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 
00:25:40.375 [2024-07-24 18:08:26.560588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.560614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.560810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.560839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.560980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.561009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.561176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.561203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.561334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.561378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.561561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.561587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.561767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.561793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.561992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.562021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.562187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.562216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 00:25:40.375 [2024-07-24 18:08:26.562367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.375 [2024-07-24 18:08:26.562393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.375 qpair failed and we were unable to recover it. 
00:25:40.375 [2024-07-24 18:08:26.562582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.562611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.562779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.562807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.562978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.563003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.563138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.563164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.563339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.563382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.563530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.563557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.563705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.563748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.563893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.563918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.564089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.564119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.564289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.564323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.564455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.564483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.564646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.564672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.564826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.564851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.565041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.375 [2024-07-24 18:08:26.565069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.375 qpair failed and we were unable to recover it.
00:25:40.375 [2024-07-24 18:08:26.565248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.565274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.565401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.565428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.565574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.565600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.565752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.565778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.565946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.565974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.566177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.566203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.566329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.566355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.566513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.566539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.566693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.566719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.566853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.566894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.567041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.567066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.567190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.567216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.567345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.567371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.567525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.567554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.567684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.567713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.567879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.567905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.568095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.568142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.568299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.568326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.568468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.568493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.568618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.568644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.568795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.568820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.568988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.569017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.569179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.569208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.569387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.569413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.569568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.569594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.569745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.569771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.569917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.569960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.570131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.570158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.570303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.570346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.570493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.570521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.570686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.570715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.570885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.570911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.571030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.571054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.571276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.571305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.571448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.571476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.571647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.571674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.571852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.571885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.572050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.572078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.572236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.572263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.572415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.572441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.572569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.572596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.572772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.572800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.376 [2024-07-24 18:08:26.572967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.376 [2024-07-24 18:08:26.572996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.376 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.573139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.573165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.573310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.573336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.573476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.573504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.573696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.573724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.573873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.573899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.574048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.574091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.574229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.574257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.574444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.574470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.574616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.574641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.574826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.574877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.575043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.575072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.575261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.575287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.575441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.575467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.575609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.575639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.575810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.575839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.575972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.576000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.576156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.576183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.576305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.576331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.576452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.576478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.576609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.576636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.576781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.576812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.576950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.576991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.577158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.577188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.577345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.577371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.577546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.577573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.577728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.577757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.577933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.577962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.578125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.578154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.578350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.578376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.578544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.578601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.578815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.578841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.579029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.579057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.377 [2024-07-24 18:08:26.579217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.377 [2024-07-24 18:08:26.579243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.377 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.579374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.579399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.579575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.579619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.579790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.579822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.580021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.580048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.580262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.580290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.580445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.580473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.580646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.580676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.580872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.580898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.581085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.581120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.581296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.581323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.581505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.581531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.581681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.581707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.581853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.581886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.582083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.657 [2024-07-24 18:08:26.582115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.657 qpair failed and we were unable to recover it.
00:25:40.657 [2024-07-24 18:08:26.582240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.582270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.582427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.582453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.582619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.582648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.582858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.582886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.583078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.583111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.583253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.583279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.583412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.583437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.583583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.583626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.583796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.583825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.583996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.584022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.584163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.584206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.584369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.584397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.584595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.584620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.584747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.584773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.584970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.584999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.585160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.585189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.585359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.585388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.585579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.585604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.585756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.585798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.585941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.585970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.586136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.586165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.586331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.586357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.586479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.586505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.586632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.586657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.586807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.586832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.586981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.587006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.587122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.587148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.587308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.587334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.587535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.587564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.587740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.587766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.587892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.587917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.588122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.588164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.588318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.588345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.588520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.588546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.588721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.588747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.588938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.588967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.589143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.589169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.589320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.658 [2024-07-24 18:08:26.589345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.658 qpair failed and we were unable to recover it.
00:25:40.658 [2024-07-24 18:08:26.589516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.589544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.589706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.589734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.589869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.589897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.590067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.590097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.590252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.590282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.590452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.590508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.590698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.590726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.590883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.590909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.591055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.591096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.591269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.591297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.591461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.591490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.591636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.591663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.591806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.591848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.592035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.592063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.592244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.592271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.592405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.592431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.592606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.592650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.592857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.592883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.593056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.593082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.593224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.593250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.593379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.593405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.593554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.593596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.593758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.593786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.594014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.594043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.594226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.594253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.594376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.594402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.594532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.594558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.594743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.659 [2024-07-24 18:08:26.594769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.659 qpair failed and we were unable to recover it.
00:25:40.659 [2024-07-24 18:08:26.594919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.594945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.595120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.595164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.595323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.595355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.595509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.595535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.595666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.595706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.595845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.595873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.596024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.596050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.596204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.596229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.596381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.596425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 00:25:40.659 [2024-07-24 18:08:26.596603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.659 [2024-07-24 18:08:26.596629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.659 qpair failed and we were unable to recover it. 
00:25:40.659 [2024-07-24 18:08:26.596779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.596820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.596955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.596981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.597113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.597139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.597266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.597292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.597435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.597464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.597623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.597648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.597818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.597847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.597999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.598024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.598200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.598242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.598448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.598474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 
00:25:40.660 [2024-07-24 18:08:26.598660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.598688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.598859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.598887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.599078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.599111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.599288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.599313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.599465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.599490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.599641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.599667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.599874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.599902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.600041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.600084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.600267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.600293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.600445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.600471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 
00:25:40.660 [2024-07-24 18:08:26.600627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.600652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.600802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.600827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.600993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.601021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.601189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.601219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.601375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.601403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.601570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.601595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.601768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.601829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.601977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.602006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.602173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.602203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.602367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.602393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 
00:25:40.660 [2024-07-24 18:08:26.602527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.602555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.602723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.602751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.602887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.602916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.603088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.603122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.603247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.603289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.603425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.603454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.603641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.603670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.603812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.660 [2024-07-24 18:08:26.603837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.660 qpair failed and we were unable to recover it. 00:25:40.660 [2024-07-24 18:08:26.603980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.604021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.604224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.604250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 
00:25:40.661 [2024-07-24 18:08:26.604392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.604418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.604566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.604593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.604776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.604827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.604997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.605026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.605216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.605245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.605444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.605470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.605611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.605663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.605836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.605866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.606026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.606054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.606236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.606262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 
00:25:40.661 [2024-07-24 18:08:26.606417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.606442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.606640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.606668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.606836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.606864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.607034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.607059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.607198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.607224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.607378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.607404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.607556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.607584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.607738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.607763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.607933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.607959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.608143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.608170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 
00:25:40.661 [2024-07-24 18:08:26.608316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.608346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.608514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.608540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.608666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.608707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.608895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.608923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.609100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.609134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.609283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.609309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.609440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.609467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.609622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.609663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.609841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.609866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.610042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.610067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 
00:25:40.661 [2024-07-24 18:08:26.610270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.610299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.610489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.610515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.610701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.610730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.610926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.610952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.611146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.611175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.611306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.661 [2024-07-24 18:08:26.611335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.661 qpair failed and we were unable to recover it. 00:25:40.661 [2024-07-24 18:08:26.611503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.611532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.611741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.611767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.611916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.611944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.612081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.612122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 
00:25:40.662 [2024-07-24 18:08:26.612303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.612328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.612476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.612501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.612644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.612669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.612850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.612878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.613015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.613044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.613213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.613239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.613405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.613434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.613590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.613619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.613760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.613790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.613964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.613990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 
00:25:40.662 [2024-07-24 18:08:26.614196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.614226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.614392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.614421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.614580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.614608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.614797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.614823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.614979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.615007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.615209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.615235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.615357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.615383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.615530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.615555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.615717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.615745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.615904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.615932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 
00:25:40.662 [2024-07-24 18:08:26.616095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.616137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.616304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.616334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.616487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.616512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.616681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.616710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.616874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.616902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.617063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.617091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.617278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.617304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.662 [2024-07-24 18:08:26.617466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.662 [2024-07-24 18:08:26.617491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.662 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.617620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.617645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.617831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.617856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 
00:25:40.663 [2024-07-24 18:08:26.618004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.618045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.618221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.618248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.618375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.618401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.618549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.618575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.618756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.618798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.618957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.618985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.619142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.619168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.619299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.619325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.619455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.619497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.619628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.619656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 
00:25:40.663 [2024-07-24 18:08:26.619816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.619844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.620022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.620047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.620173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.620217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.620378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.620406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.620563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.620592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.620732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.620757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.620903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.620944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.621112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.621140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.621317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.621346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.621495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.621521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 
00:25:40.663 [2024-07-24 18:08:26.621673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.621700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.621893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.621918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.622066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.622091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.622224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.622250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.622425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.622451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.622629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.622658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.622816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.622845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.623036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.623061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.623196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.623222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 00:25:40.663 [2024-07-24 18:08:26.623353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.623379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it. 
00:25:40.663 [2024-07-24 18:08:26.623571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.663 [2024-07-24 18:08:26.623597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.663 qpair failed and we were unable to recover it.
[... the three-line error above repeats with advancing timestamps, roughly 200 more occurrences between 18:08:26.623 and 18:08:26.663; every occurrence reports the same tqpair=0xa6c250, addr=10.0.0.2, port=4420, errno = 111 ...]
00:25:40.669 [2024-07-24 18:08:26.663510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.663536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it.
00:25:40.669 [2024-07-24 18:08:26.663669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.663697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.663865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.663894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.664030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.664058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.664235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.664261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.664412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.664455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.664639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.664664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.664790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.664815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.665005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.665030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.665200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.665230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.665396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.665424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 
00:25:40.669 [2024-07-24 18:08:26.665622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.665650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.665798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.665824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.665992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.666020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.666199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.666225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.666374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.666415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.666582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.666607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.666774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.666802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.666945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.669 [2024-07-24 18:08:26.666986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.669 qpair failed and we were unable to recover it. 00:25:40.669 [2024-07-24 18:08:26.667169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.667212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.667388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.667413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 
00:25:40.670 [2024-07-24 18:08:26.667561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.667602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.667759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.667787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.667986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.668012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.668164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.668194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.668349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.668375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.668504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.668530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.668720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.668749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.668900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.668925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.669118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.669145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.669273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.669300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 
00:25:40.670 [2024-07-24 18:08:26.669451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.669491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.669668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.669693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.669837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.669866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.670054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.670082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.670288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.670317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.670469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.670496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.670650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.670676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.670833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.670859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.671031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.671056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.671208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.671234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 
00:25:40.670 [2024-07-24 18:08:26.671398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.671424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.671577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.671603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.671812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.671838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.671989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.672018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.672197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.672224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.672356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.672398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.672532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.672560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.672700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.672729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.672882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.672924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.673067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.673096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 
00:25:40.670 [2024-07-24 18:08:26.673264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.673293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.673476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.673502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.673649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.673675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.673856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.673900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.674053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.674078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.674235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.670 [2024-07-24 18:08:26.674261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.670 qpair failed and we were unable to recover it. 00:25:40.670 [2024-07-24 18:08:26.674391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.674418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.674546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.674573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.674780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.674805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.674954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.674980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 
00:25:40.671 [2024-07-24 18:08:26.675148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.675177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.675308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.675336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.675481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.675510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.675677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.675702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.675862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.675895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.676036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.676064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.676207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.676238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.676435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.676461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.676603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.676657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.676827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.676854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 
00:25:40.671 [2024-07-24 18:08:26.676977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.677003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.677184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.677210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.677382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.677407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.677558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.677601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.677749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.677779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.677974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.678000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.678149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.678178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.678357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.678383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.678508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.678534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.678695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.678721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 
00:25:40.671 [2024-07-24 18:08:26.678846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.678872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.679052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.679081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.679238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.679264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.679435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.679461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.679648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.679677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.679843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.679873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.680043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.680071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.680238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.680264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.680431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.680459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.671 qpair failed and we were unable to recover it. 00:25:40.671 [2024-07-24 18:08:26.680624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.671 [2024-07-24 18:08:26.680654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 
00:25:40.672 [2024-07-24 18:08:26.680833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.680859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.681011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.681037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.681211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.681240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.681377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.681409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.681553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.681583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.681756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.681781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.681947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.681975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.682131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.682160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.682324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.682352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.682526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.682552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 
00:25:40.672 [2024-07-24 18:08:26.682702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.682728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.682927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.682956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.683126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.683155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.683349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.683375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.683567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.683595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.683765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.683793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.683934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.683962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.684136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.684162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.684280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.684307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.684479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.684522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 
00:25:40.672 [2024-07-24 18:08:26.684716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.684741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.684862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.684888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.685082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.685130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.685309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.685335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.685486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.685511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.685686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.685711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.685876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.685905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.686040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.686068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.686253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.686279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.686428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.686454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 
00:25:40.672 [2024-07-24 18:08:26.686576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.686618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.686806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.686835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.687008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.687034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.687204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.687231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.687377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.687419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.687610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.687639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.687786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.687811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.672 qpair failed and we were unable to recover it. 00:25:40.672 [2024-07-24 18:08:26.687939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.672 [2024-07-24 18:08:26.687965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.673 qpair failed and we were unable to recover it. 00:25:40.673 [2024-07-24 18:08:26.688090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.673 [2024-07-24 18:08:26.688137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.673 qpair failed and we were unable to recover it. 00:25:40.673 [2024-07-24 18:08:26.688305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.673 [2024-07-24 18:08:26.688333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.673 qpair failed and we were unable to recover it. 
00:25:40.673 [2024-07-24 18:08:26.688494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.673 [2024-07-24 18:08:26.688522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.673 qpair failed and we were unable to recover it. 
00:25:40.673 [... the three messages above repeat, with fresh timestamps, for every reconnect attempt from 18:08:26.688 through 18:08:26.728; each attempt fails with errno = 111 against tqpair=0xa6c250, addr=10.0.0.2, port=4420 ...] 
00:25:40.678 [2024-07-24 18:08:26.728203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.728230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 
00:25:40.678 [2024-07-24 18:08:26.728381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.728407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.728558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.728586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.728761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.728787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.728991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.729020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.729213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.729239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.729411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.729440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.729618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.729644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.729767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.729793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.729937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.729979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.730149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.730178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 
00:25:40.678 [2024-07-24 18:08:26.730354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.678 [2024-07-24 18:08:26.730380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.678 qpair failed and we were unable to recover it. 00:25:40.678 [2024-07-24 18:08:26.730509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.730535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.730704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.730735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.730895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.730924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.731110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.731136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.731292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.731318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.731467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.731493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.731642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.731668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.731851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.731879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.732050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.732077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 
00:25:40.679 [2024-07-24 18:08:26.732271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.732297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.732448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.732490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.732662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.732690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.732864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.732889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.733036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.733065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.733243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.733272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.733467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.733496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.733646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.733672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.733821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.733847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.733992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.734017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 
00:25:40.679 [2024-07-24 18:08:26.734196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.734222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.734375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.734402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.734566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.734592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.734782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.734810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.734976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.735005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.735179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.735206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.735328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.735354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.735490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.735522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.735707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.735736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.735880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.735906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 
00:25:40.679 [2024-07-24 18:08:26.736034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.736059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.736233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.736261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.736460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.736486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.736630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.736656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.736775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.736818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.736982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.737010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.737165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.737191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.737315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.737342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.737500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.737526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.679 qpair failed and we were unable to recover it. 00:25:40.679 [2024-07-24 18:08:26.737698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.679 [2024-07-24 18:08:26.737723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 
00:25:40.680 [2024-07-24 18:08:26.737853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.737879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.738057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.738083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.738250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.738277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.738394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.738421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.738571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.738600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.738757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.738782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.738962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.738987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.739162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.739191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.739320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.739348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.739518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.739544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 
00:25:40.680 [2024-07-24 18:08:26.739667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.739692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.739865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.739893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.740051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.740080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.740274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.740299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.740428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.740459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.740590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.740615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.740740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.740767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.740885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.740910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.741056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.741081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.741226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.741268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 
00:25:40.680 [2024-07-24 18:08:26.741404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.741432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.741598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.741623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.741819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.741874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.742081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.742124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.742270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.742296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.742447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.742472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.742632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.742661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.742802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.742829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.743025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.743054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.743236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.743263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 
00:25:40.680 [2024-07-24 18:08:26.743397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.743422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.743576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.743601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.743833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.680 [2024-07-24 18:08:26.743859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.680 qpair failed and we were unable to recover it. 00:25:40.680 [2024-07-24 18:08:26.743982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.744007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.744177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.744204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.744386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.744415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.744582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.744610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.744787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.744812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.744929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.744972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.745143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.745172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 
00:25:40.681 [2024-07-24 18:08:26.745307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.745335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.745508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.745533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.745684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.745728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.745866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.745894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.746062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.746090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.746249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.746275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.746426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.746452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.746628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.746656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.746792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.746820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.746995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.747021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 
00:25:40.681 [2024-07-24 18:08:26.747174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.747200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.747348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.747391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.747565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.747594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.747766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.747792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.747956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.747984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.748121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.748163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.748343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.748369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.748527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.748552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.748698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.748724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.748902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.748930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 
00:25:40.681 [2024-07-24 18:08:26.749064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.749092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.749272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.749298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.749431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.749476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.749637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.749665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.749822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.749850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.750111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.750155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.750287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.750312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.750524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.750552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.750720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.750748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.750922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.750948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 
00:25:40.681 [2024-07-24 18:08:26.751117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.681 [2024-07-24 18:08:26.751151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.681 qpair failed and we were unable to recover it. 00:25:40.681 [2024-07-24 18:08:26.751321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.751349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.751528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.751555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.751713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.751739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.751942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.751971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.752131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.752179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.752343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.752369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.752519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.752544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.752699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.752724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 00:25:40.682 [2024-07-24 18:08:26.752852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.682 [2024-07-24 18:08:26.752877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.682 qpair failed and we were unable to recover it. 
00:25:40.682 [2024-07-24 18:08:26.753025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.682 [2024-07-24 18:08:26.753050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.682 qpair failed and we were unable to recover it.
00:25:40.690 [... the identical connect()/qpair-failure triplet repeats continuously from 18:08:26.753 through 18:08:26.793 (errno = 111, tqpair=0xa6c250, addr=10.0.0.2, port=4420); duplicate log entries elided ...]
00:25:40.690 [2024-07-24 18:08:26.793888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.690 [2024-07-24 18:08:26.793932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.690 qpair failed and we were unable to recover it. 00:25:40.690 [2024-07-24 18:08:26.794077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.690 [2024-07-24 18:08:26.794116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.690 qpair failed and we were unable to recover it. 00:25:40.690 [2024-07-24 18:08:26.794255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.690 [2024-07-24 18:08:26.794284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.690 qpair failed and we were unable to recover it. 00:25:40.690 [2024-07-24 18:08:26.794459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.690 [2024-07-24 18:08:26.794485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.690 qpair failed and we were unable to recover it. 00:25:40.690 [2024-07-24 18:08:26.794701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.690 [2024-07-24 18:08:26.794750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.690 qpair failed and we were unable to recover it. 00:25:40.690 [2024-07-24 18:08:26.794942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.690 [2024-07-24 18:08:26.794971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.690 qpair failed and we were unable to recover it. 00:25:40.690 [2024-07-24 18:08:26.795134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.690 [2024-07-24 18:08:26.795163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.690 qpair failed and we were unable to recover it. 00:25:40.690 [2024-07-24 18:08:26.795351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.795376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.795547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.795575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.795734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.795763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 
00:25:40.691 [2024-07-24 18:08:26.795901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.795935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.796115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.796141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.796312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.796341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.796486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.796514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.796680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.796708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.796878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.796904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.797053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.797096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.797264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.797290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.797411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.797437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.797590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.797615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 
00:25:40.691 [2024-07-24 18:08:26.797780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.797808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.797973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.798001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.798140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.798169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.798366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.798392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.798590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.798640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.798782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.798811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.798985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.799010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.799181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.799207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.799381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.799410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.799578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.799603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 
00:25:40.691 [2024-07-24 18:08:26.799733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.799759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.799883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.799908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.800058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.800083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.800221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.800247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.800395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.800423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.800594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.800619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.800830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.800880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.801024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.801057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.801235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.801261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.801392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.801419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 
00:25:40.691 [2024-07-24 18:08:26.801573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.801599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.691 [2024-07-24 18:08:26.801750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.691 [2024-07-24 18:08:26.801776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.691 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.801944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.801972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.802169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.802195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.802389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.802417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.802570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.802595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.802754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.802780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.802971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.802997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.803147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.803191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.803372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.803398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 
00:25:40.692 [2024-07-24 18:08:26.803523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.803548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.803706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.803731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.803855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.803880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.804003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.804029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.804178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.804204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.804391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.804417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.804564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.804590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.804749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.804790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.804930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.804959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.805112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.805138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 
00:25:40.692 [2024-07-24 18:08:26.805309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.805338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.805474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.805502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.805669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.805698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.805847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.805872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.805989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.806015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.806162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.806190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.806382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.806411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.806585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.806610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.806760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.806786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.806916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.806942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 
00:25:40.692 [2024-07-24 18:08:26.807117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.807147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.807340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.807366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.807563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.807591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.807729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.807757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.807926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.807952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.808119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.808145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.808295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.808321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.808440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.808467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.808620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.808665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 00:25:40.692 [2024-07-24 18:08:26.808817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.692 [2024-07-24 18:08:26.808843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.692 qpair failed and we were unable to recover it. 
00:25:40.693 [2024-07-24 18:08:26.808989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.809014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.809187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.809213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.809367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.809392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.809544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.809569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.809799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.809824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.810000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.810028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.810195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.810224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.810397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.810424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.810601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.810651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.810816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.810845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 
00:25:40.693 [2024-07-24 18:08:26.811083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.811118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.811287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.811312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.811466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.811492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.811620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.811645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.811896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.811925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.812072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.812098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.812253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.812280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.812405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.812430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.812584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.812610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.812731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.812755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 
00:25:40.693 [2024-07-24 18:08:26.812922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.812951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.813090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.813138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.813306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.813335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.813503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.813529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.813654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.813698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.813900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.813929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.814120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.814150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.814319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.814345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.814521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.814550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.814712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.814740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 
00:25:40.693 [2024-07-24 18:08:26.814908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.814937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.815115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.693 [2024-07-24 18:08:26.815141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.693 qpair failed and we were unable to recover it. 00:25:40.693 [2024-07-24 18:08:26.815375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.815404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.815584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.815610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.815758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.815783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.815968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.815994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.816158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.816187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.816348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.816378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.816537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.816566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.816720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.816746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 
00:25:40.694 [2024-07-24 18:08:26.816878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.816904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.817082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.817132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.817280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.817305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.817450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.817476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.817624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.817666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.817809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.817837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.818008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.818036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.818187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.818214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.818344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.818370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 00:25:40.694 [2024-07-24 18:08:26.818588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.694 [2024-07-24 18:08:26.818613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.694 qpair failed and we were unable to recover it. 
00:25:40.694 [2024-07-24 18:08:26.818811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.694 [2024-07-24 18:08:26.818840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.694 qpair failed and we were unable to recover it.
00:25:40.694-00:25:40.700 [... the same message group repeats verbatim, only the microsecond timestamps advancing, from 18:08:26.818984 through 18:08:26.858323 -- roughly 200 occurrences of connect() failed, errno = 111 followed by the sock connection error for tqpair=0xa6c250 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." ...]
00:25:40.700 [2024-07-24 18:08:26.858484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.858513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.858690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.858716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.858890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.858916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.859046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.859071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.859241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.859270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.859419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.859448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.859595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.859623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.859785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.859810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.859977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.860006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.860172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.860203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 
00:25:40.700 [2024-07-24 18:08:26.860348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.860388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.860537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.860563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.860758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.860787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.860929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.860958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.861125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.861154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.861308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.861333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.861515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.861557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.861721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.861749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.861919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.861948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.862126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.862153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 
00:25:40.700 [2024-07-24 18:08:26.862309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.862334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.862482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.862508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.862724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.862750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.862925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.862950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.863118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.863169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.863316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.863341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.863508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.863536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.863677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.863701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.863897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.863925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.864094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.864147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 
00:25:40.700 [2024-07-24 18:08:26.864334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.864363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.864538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.864563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.864704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.864732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.700 [2024-07-24 18:08:26.864892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.700 [2024-07-24 18:08:26.864921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.700 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.865052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.865083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.865267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.865293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.865408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.865434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.865613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.865642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.865815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.865844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.866006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.866031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 
00:25:40.701 [2024-07-24 18:08:26.866209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.866252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.866395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.866424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.866590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.866617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.866771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.866796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.866974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.867018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.867182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.867213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.867351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.867380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.867534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.867559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.867731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.867760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.867894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.867921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 
00:25:40.701 [2024-07-24 18:08:26.868085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.868119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.868272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.868304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.868580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.868637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.868786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.868814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.868972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.869000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.869201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.869227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.869391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.869419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.869546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.869574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.869736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.869764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.869911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.869937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 
00:25:40.701 [2024-07-24 18:08:26.870086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.870116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.870266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.870294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.870526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.870552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.870703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.870728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.870931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.870959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.871115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.871144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.871307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.871335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.871486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.871513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.871677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.871706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.871884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.871910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 
00:25:40.701 [2024-07-24 18:08:26.872042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.872068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.872198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.872224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.872371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.701 [2024-07-24 18:08:26.872414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.701 qpair failed and we were unable to recover it. 00:25:40.701 [2024-07-24 18:08:26.872546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.872575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.872754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.872780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.872950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.872978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.873118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.873147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.873344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.873386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.873547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.873580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.873748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.873773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 
00:25:40.702 [2024-07-24 18:08:26.873944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.873973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.874130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.874160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.874330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.874360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.874525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.874551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.874740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.874768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.874948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.874974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.875148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.875174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.875323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.875349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.875507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.875533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.875676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.875702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 
00:25:40.702 [2024-07-24 18:08:26.875880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.875908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.876066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.876092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.876255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.876281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.876410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.876436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.876559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.876584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.876729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.876754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.876902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.876931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.877073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.877124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.877286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.877315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.877491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.877516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 
00:25:40.702 [2024-07-24 18:08:26.877666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.877709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.877896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.877924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.878097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.878136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.878304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.878330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.878492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.878518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.878640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.878666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.878795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.878821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.878972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.879001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.879183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.879209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 00:25:40.702 [2024-07-24 18:08:26.879328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.702 [2024-07-24 18:08:26.879354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.702 qpair failed and we were unable to recover it. 
00:25:40.703 [2024-07-24 18:08:26.879501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.879529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.879702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.879727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.879873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.879917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.880048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.880076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.880236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.880262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.880373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.880399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.880531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.880557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.880738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.880766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.880933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.880961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.881107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.881137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 
00:25:40.703 [2024-07-24 18:08:26.881367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.881393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.881589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.881617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.881785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.881813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.881952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.881978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.882126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.882169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.882365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.882391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.882538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.882581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.882734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.882759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.882911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.882954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 00:25:40.703 [2024-07-24 18:08:26.883083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.703 [2024-07-24 18:08:26.883116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.703 qpair failed and we were unable to recover it. 
00:25:40.703 [2024-07-24 18:08:26.883297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.703 [2024-07-24 18:08:26.883323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:40.703 qpair failed and we were unable to recover it.
00:25:40.703 (the same three-line failure, posix_sock_create connect() errno = 111, then an nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420, then an unrecovered qpair, repeats continuously from 18:08:26.883498 through 18:08:26.922629; no connection attempt in this interval succeeded)
00:25:40.993 [2024-07-24 18:08:26.922785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.922810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.922930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.922970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.923212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.923240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.923408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.923433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.923612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.923640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.923802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.923830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.923977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.924005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.924208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.924233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.924386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.924428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.924603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.924628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 
00:25:40.993 [2024-07-24 18:08:26.924780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.924805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.924962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.924987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.925157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.925186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.925364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.925389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.925514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.925541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.925693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.925719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.925916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.925944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.926091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.926125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.926317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.926345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.926488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.926515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 
00:25:40.993 [2024-07-24 18:08:26.926666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.926708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.926898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.926926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.927091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.927127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.927299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.927325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.927503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.927549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.927794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.927845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.927979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.928008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.993 [2024-07-24 18:08:26.928175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.993 [2024-07-24 18:08:26.928203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.993 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.928324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.928350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.928522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.928550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 
00:25:40.994 [2024-07-24 18:08:26.928739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.928768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.928942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.928968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.929140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.929166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.929355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.929384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.929544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.929572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.929753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.929778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.929930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.929959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.930094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.930143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.930298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.930324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.930476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.930501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 
00:25:40.994 [2024-07-24 18:08:26.930652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.930677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.930872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.930897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.931027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.931052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.931170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.931194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.931333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.931375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.931540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.931568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.931703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.931731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.931905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.931931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.932137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.932167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.932333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.932358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 
00:25:40.994 [2024-07-24 18:08:26.932480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.932506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.932657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.932686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.932859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.932908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.933068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.933096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.933265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.933295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.933437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.933462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.933624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.933666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.933836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.933865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.934011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.934040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.934217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.934243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 
00:25:40.994 [2024-07-24 18:08:26.934398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.934423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.934572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.934598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.934753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.934778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.934949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.994 [2024-07-24 18:08:26.934978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.994 qpair failed and we were unable to recover it. 00:25:40.994 [2024-07-24 18:08:26.935193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.935220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.935353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.935378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.935533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.935561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.935728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.935754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.935916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.935944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.936129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.936156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 
00:25:40.995 [2024-07-24 18:08:26.936322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.936347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.936496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.936521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.936671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.936696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.936873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.936917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.937055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.937082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.937259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.937285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.937410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.937453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.937648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.937676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.937809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.937838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.938010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.938035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 
00:25:40.995 [2024-07-24 18:08:26.938189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.938218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.938403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.938428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.938579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.938622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.938779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.938804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.938938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.938963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.939139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.939168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.939347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.939373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.939527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.939552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.939697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.939722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.939937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.939962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 
00:25:40.995 [2024-07-24 18:08:26.940124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.940150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.940276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.940302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.940471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.940506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.940675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.940703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.940877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.995 [2024-07-24 18:08:26.940903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.995 qpair failed and we were unable to recover it. 00:25:40.995 [2024-07-24 18:08:26.941076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.941114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.941282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.941311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.941482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.941510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.941666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.941691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.941841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.941867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 
00:25:40.996 [2024-07-24 18:08:26.942036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.942066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.942243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.942269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.942396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.942421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.942568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.942593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.942746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.942791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.942959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.942988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.943161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.943191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.943343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.943369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.943520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.943546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.943669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.943694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 
00:25:40.996 [2024-07-24 18:08:26.943820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.943845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.944026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.944051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.944219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.944249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.944420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.944449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.944609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.944637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.944785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.944810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.944937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.944963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.945183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.945212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.945356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.945384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.945592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.945618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 
00:25:40.996 [2024-07-24 18:08:26.945790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.945819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.945997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.946026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.946199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.946225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.946365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.946390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.946513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.946554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.946727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.946752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.946900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.946925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.947074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.947100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.947257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.947282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.947474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.947499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 
00:25:40.996 [2024-07-24 18:08:26.947640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.947666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.947789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.947814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.947974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.948018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.996 qpair failed and we were unable to recover it. 00:25:40.996 [2024-07-24 18:08:26.948194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.996 [2024-07-24 18:08:26.948220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.997 qpair failed and we were unable to recover it. 00:25:40.997 [2024-07-24 18:08:26.948347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.997 [2024-07-24 18:08:26.948373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.997 qpair failed and we were unable to recover it. 00:25:40.997 [2024-07-24 18:08:26.948560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.997 [2024-07-24 18:08:26.948585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.997 qpair failed and we were unable to recover it. 00:25:40.997 [2024-07-24 18:08:26.948870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.997 [2024-07-24 18:08:26.948933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.997 qpair failed and we were unable to recover it. 00:25:40.997 [2024-07-24 18:08:26.949093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.997 [2024-07-24 18:08:26.949127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.997 qpair failed and we were unable to recover it. 00:25:40.997 [2024-07-24 18:08:26.949262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.997 [2024-07-24 18:08:26.949291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.997 qpair failed and we were unable to recover it. 00:25:40.997 [2024-07-24 18:08:26.949460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.997 [2024-07-24 18:08:26.949487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:40.997 qpair failed and we were unable to recover it. 
00:25:41.002 [2024-07-24 18:08:26.987187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.987214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.987332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.987357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.987481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.987507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.987679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.987704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.987875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.987903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.988071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.988099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.988240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.988265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.988381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.988406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.988532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.988558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.988734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.988762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 
00:25:41.002 [2024-07-24 18:08:26.988927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.988956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.989161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.989187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.989320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.989361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.989520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.989549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.989714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.989742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.989891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.989918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.002 [2024-07-24 18:08:26.990072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.002 [2024-07-24 18:08:26.990121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.002 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.990313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.990341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.990485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.990514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.990667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.990692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 
00:25:41.003 [2024-07-24 18:08:26.990819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.990860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.991049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.991077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.991224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.991250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.991380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.991406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.991531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.991578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.991770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.991799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.991937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.991965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.992136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.992162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.992289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.992332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.992499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.992524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 
00:25:41.003 [2024-07-24 18:08:26.992672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.992698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.992857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.992883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.993033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.993076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.993221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.993249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.993411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.993439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.993639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.993664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.993795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.993824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.994002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.994027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.994222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.994251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.994396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.994422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 
00:25:41.003 [2024-07-24 18:08:26.994623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.994672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.994797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.994825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.994967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.994996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.995158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.995184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.995315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.995356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.995504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.995533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.995698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.995726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.995901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.995927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.996045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.996087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.996244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.996273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 
00:25:41.003 [2024-07-24 18:08:26.996439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.996467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.996665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.996691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.996856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.996885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.997059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.997085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.997218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.003 [2024-07-24 18:08:26.997244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.003 qpair failed and we were unable to recover it. 00:25:41.003 [2024-07-24 18:08:26.997396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.997421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.997570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.997595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.997735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.997764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.997967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.997995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.998165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.998192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 
00:25:41.004 [2024-07-24 18:08:26.998348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.998374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.998540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.998566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.998717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.998742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.998928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.998953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.999123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.999152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.999311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.999343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.999500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.999526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.999670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.999696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:26.999866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:26.999892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.000083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.000117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 
00:25:41.004 [2024-07-24 18:08:27.000280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.000308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.000505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.000531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.000724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.000752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.000886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.000915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.001063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.001093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.001247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.001273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.001417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.001461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.001640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.001669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.001806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.001838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.001995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.002021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 
00:25:41.004 [2024-07-24 18:08:27.002147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.002190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.002378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.002406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.002569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.002598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.002789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.002815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.002969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.003012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.003178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.003207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.003338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.004 [2024-07-24 18:08:27.003365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.004 qpair failed and we were unable to recover it. 00:25:41.004 [2024-07-24 18:08:27.003535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.003560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.003712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.003738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.003909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.003950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 
00:25:41.005 [2024-07-24 18:08:27.004083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.004118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.004254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.004279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.004428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.004475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.004649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.004678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.004843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.004871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.005025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.005050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.005182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.005208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.005405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.005433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.005615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.005643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.005816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.005842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 
00:25:41.005 [2024-07-24 18:08:27.005969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.006012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.006180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.006209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.006374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.006403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.006544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.006570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.006718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.006745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.006915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.006943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.007141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.007170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.007365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.007390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.007522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.007563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.007730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.007756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 
00:25:41.005 [2024-07-24 18:08:27.007896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.007925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.008153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.008180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.008330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.008355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.008538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.008567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.008712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.008740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.008915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.008941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.009113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.009142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.009311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.009339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.009468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.009496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.009645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.009670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 
00:25:41.005 [2024-07-24 18:08:27.009805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.009831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.010046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.010072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.010199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.010225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.010367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.010393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.005 [2024-07-24 18:08:27.010519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.005 [2024-07-24 18:08:27.010544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.005 qpair failed and we were unable to recover it. 00:25:41.006 [2024-07-24 18:08:27.010661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.006 [2024-07-24 18:08:27.010686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.006 qpair failed and we were unable to recover it. 00:25:41.006 [2024-07-24 18:08:27.010837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.006 [2024-07-24 18:08:27.010862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.006 qpair failed and we were unable to recover it. 00:25:41.006 [2024-07-24 18:08:27.011008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.006 [2024-07-24 18:08:27.011032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.006 qpair failed and we were unable to recover it. 00:25:41.006 [2024-07-24 18:08:27.011168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.006 [2024-07-24 18:08:27.011194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.006 qpair failed and we were unable to recover it. 00:25:41.006 [2024-07-24 18:08:27.011369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.006 [2024-07-24 18:08:27.011397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.006 qpair failed and we were unable to recover it. 
00:25:41.006 [2024-07-24 18:08:27.011560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.006 [2024-07-24 18:08:27.011590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.006 qpair failed and we were unable to recover it.
00:25:41.006 [... the same three-line error sequence repeats for every reconnect attempt from 18:08:27.011 through 18:08:27.051 (build time 00:25:41.006 to 00:25:41.011): connect() fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0xa6c250 for addr=10.0.0.2, port=4420, and each time the qpair fails and cannot be recovered ...]
00:25:41.011 [2024-07-24 18:08:27.051179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.011 [2024-07-24 18:08:27.051205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.011 qpair failed and we were unable to recover it.
00:25:41.011 [2024-07-24 18:08:27.051335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.051375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.051507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.051535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.051696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.051724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.051891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.051916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.052032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.052057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.052242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.052271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.052460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.052488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.052658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.011 [2024-07-24 18:08:27.052683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.011 qpair failed and we were unable to recover it. 00:25:41.011 [2024-07-24 18:08:27.052812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.052837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.053033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.053061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 
00:25:41.012 [2024-07-24 18:08:27.053242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.053269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.053450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.053476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.053714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.053768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.053939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.053967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.054154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.054182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.054372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.054398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.054580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.054630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.054786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.054814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.055002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.055030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.055164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.055190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 
00:25:41.012 [2024-07-24 18:08:27.055338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.055379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.055583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.055609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.055751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.055776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.055900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.055926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.056048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.056074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.056228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.056254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.056458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.056486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.056629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.056656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.056788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.056814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.056957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.056985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 
00:25:41.012 [2024-07-24 18:08:27.057140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.057167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.057320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.057346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.057518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.057547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.057765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.057794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.057961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.057990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.058180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.058207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.058336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.058361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.058539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.058567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.058709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.058742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.058929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.058957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 
00:25:41.012 [2024-07-24 18:08:27.059128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.059171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.059320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.059346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.059518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.059547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.059727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.059753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.059872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.059898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.060026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.060051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.012 qpair failed and we were unable to recover it. 00:25:41.012 [2024-07-24 18:08:27.060214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.012 [2024-07-24 18:08:27.060240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.060389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.060414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.060587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.060641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.060862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.060890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 
00:25:41.013 [2024-07-24 18:08:27.061046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.061073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.061253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.061279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.061432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.061461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.061642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.061668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.061820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.061862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.062007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.062032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.062228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.062257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.062397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.062425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.062590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.062618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.062769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.062796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 
00:25:41.013 [2024-07-24 18:08:27.062946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.062972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.063154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.063182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.063345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.063373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.063569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.063595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.063728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.063754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.063904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.063950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.064131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.064157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.064308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.064333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.064516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.064571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.064745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.064773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 
00:25:41.013 [2024-07-24 18:08:27.064939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.064967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.065149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.065184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.065360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.065403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.065542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.065571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.065704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.065732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.065898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.065923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.066056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.066082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.066275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.066303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.013 [2024-07-24 18:08:27.066459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.013 [2024-07-24 18:08:27.066485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.013 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.066636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.066662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 
00:25:41.014 [2024-07-24 18:08:27.066805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.066833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.067000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.067029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.067191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.067220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.067365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.067391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.067547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.067572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.067724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.067749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.067923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.067949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.068106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.068132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.068278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.068306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.068449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.068479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 
00:25:41.014 [2024-07-24 18:08:27.068627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.068655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.068828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.068854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.069006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.069032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.069178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.069204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.069354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.069380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.069507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.069533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.069659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.069685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.069855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.069881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.070028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.070053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.070191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.070217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 
00:25:41.014 [2024-07-24 18:08:27.070369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.070394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.070607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.070635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.070799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.070827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.071024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.071050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.071223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.071252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.071391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.071419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.071580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.071613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.071767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.071792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.071946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.071971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.072143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.072172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 
00:25:41.014 [2024-07-24 18:08:27.072306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.072335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.072534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.072560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.072750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.072799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.072961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.072989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.073124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.073153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.073306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.073332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.073530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.014 [2024-07-24 18:08:27.073584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.014 qpair failed and we were unable to recover it. 00:25:41.014 [2024-07-24 18:08:27.073756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.073782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.073909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.073935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.074061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.074087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 
00:25:41.015 [2024-07-24 18:08:27.074296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.074324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.074497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.074523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.074654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.074679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.074824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.074849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.075013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.075041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.075202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.075231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.075405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.075431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.075564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.075589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.075738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.075764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 00:25:41.015 [2024-07-24 18:08:27.075936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.015 [2024-07-24 18:08:27.075979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.015 qpair failed and we were unable to recover it. 
00:25:41.015 [2024-07-24 18:08:27.076113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.015 [2024-07-24 18:08:27.076142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.015 qpair failed and we were unable to recover it.
00:25:41.015 [... the three lines above repeat identically, with only the timestamps advancing (18:08:27.076113 through 18:08:27.115260); every connect() attempt by tqpair=0xa6c250 to 10.0.0.2:4420 fails with errno = 111, and each time the qpair fails and cannot be recovered ...]
00:25:41.021 [2024-07-24 18:08:27.115415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.115460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.115594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.115623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.115791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.115819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.115989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.116015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.116146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.116171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.116379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.116405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.116552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.116595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.116741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.116766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.116915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.116941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.117117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.117147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 
00:25:41.021 [2024-07-24 18:08:27.117290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.117318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.117521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.117551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.117749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.117775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.117905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.117931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.118060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.118086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.118248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.118274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.118438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.118467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.118655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.118684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.118858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.118884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.119001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.119026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 
00:25:41.021 [2024-07-24 18:08:27.119150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.119176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.119330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.119358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.119518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.119546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.119696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.119722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.119844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.119870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.120006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.120032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.120168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.120195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.120319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.120345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.120512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.120553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.120764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.120792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 
00:25:41.021 [2024-07-24 18:08:27.120927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.120955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.121146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.121187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.121317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.121343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.121463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.121488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.121640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.121669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.121841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.121866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.121989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.021 [2024-07-24 18:08:27.122032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.021 qpair failed and we were unable to recover it. 00:25:41.021 [2024-07-24 18:08:27.122198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.122225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.122411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.122443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.122617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.122642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 
00:25:41.022 [2024-07-24 18:08:27.122773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.122817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.123009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.123034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.123157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.123183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.123362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.123387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.123606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.123659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.123828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.123857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.124041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.124067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.124199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.124227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.124381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.124407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.124593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.124618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 
00:25:41.022 [2024-07-24 18:08:27.124793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.124818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.124951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.124977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.125110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.125154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.125328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.125354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.125502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.125528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.125697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.125722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.125867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.125893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.126023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.126048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.126221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.126248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.126399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.126424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 
00:25:41.022 [2024-07-24 18:08:27.126608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.126657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.126818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.126843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.126994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.127020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.127182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.127209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.127321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.127347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.127491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.127516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.127671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.127697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.127849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.127875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.128022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.128048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.128188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.128214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 
00:25:41.022 [2024-07-24 18:08:27.128417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.128447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.128595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.128620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.128771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.128796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.128971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.128996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.129129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.129155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.022 [2024-07-24 18:08:27.129331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.022 [2024-07-24 18:08:27.129356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.022 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.129512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.129563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.129730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.129758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.129922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.129951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.130108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.130139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 
00:25:41.023 [2024-07-24 18:08:27.130287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.130329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.130493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.130522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.130651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.130679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.130834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.130859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.131040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.131066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.131242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.131271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.131397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.131425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.131568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.131594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.131746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.131772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.131905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.131930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 
00:25:41.023 [2024-07-24 18:08:27.132051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.132077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.132242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.132268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.132389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.132415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.132596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.132625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.132749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.132777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.132950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.132976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.133137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.133166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.133305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.133333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.133489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.133518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.133694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.133719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 
00:25:41.023 [2024-07-24 18:08:27.133872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.133913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.134070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.134099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.134272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.134300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.134476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.134502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.134655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.134681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.134845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.134873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.135046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.135074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.023 [2024-07-24 18:08:27.135235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.023 [2024-07-24 18:08:27.135260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.023 qpair failed and we were unable to recover it. 00:25:41.024 [2024-07-24 18:08:27.135391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.024 [2024-07-24 18:08:27.135433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.024 qpair failed and we were unable to recover it. 00:25:41.024 [2024-07-24 18:08:27.135619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.024 [2024-07-24 18:08:27.135644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.024 qpair failed and we were unable to recover it. 
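In these retries, errno = 111 is ECONNREFUSED: each connect() reaches 10.0.0.2 but nothing is listening on port 4420 during the disconnect window, so the kernel answers the SYN with RST and the host-side qpair fails immediately. A minimal shell probe that shows the same refusal (a sketch, assuming bash with /dev/tcp support and that it runs in the same network namespace as the host side of the test):

#!/usr/bin/env bash
# Poll the NVMe-oF TCP listener once per second until connect() succeeds.
addr=10.0.0.2 port=4420
while ! (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; do
    # While no process listens on ${addr}:${port}, connect() fails with
    # ECONNREFUSED (errno 111), exactly like the qpair retries above.
    echo "connect() to ${addr}:${port} refused, retrying"
    sleep 1
done
echo "listener is back on ${addr}:${port}"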
[... connect()/qpair-failed sequence repeats, 18:08:27.135-18:08:27.137 ...]
00:25:41.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2892226 Killed "${NVMF_APP[@]}" "$@"
00:25:41.024 [2024-07-24 18:08:27.137134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.024 [2024-07-24 18:08:27.137169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.024 qpair failed and we were unable to recover it.
[... failure repeats at 18:08:27.137 ...]
00:25:41.024 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:41.024 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:41.024 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:41.024 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:41.024 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect()/qpair-failed sequences continue, 18:08:27.137-18:08:27.138 ...]
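The xtrace lines above are test case tc2 restarting the target it just killed: disconnect_init 10.0.0.2 (target_disconnect.sh line 48) calls nvmfappstart -m 0xF0 to bring a fresh nvmf_tgt up while the host keeps retrying its qpair. In outline the step looks roughly like this (a sketch built from the names visible in the trace, not the script's exact body; the RPC re-configuration is elided):

# rough shape of the restart step driven by target_disconnect.sh
disconnect_init() {
    local ip=$1            # 10.0.0.2 in this run
    nvmfappstart -m 0xF0   # relaunch nvmf_tgt on core mask 0xF0
    # ... then re-create the TCP transport, the subsystem and the
    # listener on $ip:4420 over RPC so the host qpair can reconnect ...
}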
[... connect()/qpair-failed sequence repeats, 18:08:27.139-18:08:27.140 ...]
[... connect()/qpair-failed sequence repeats, 18:08:27.141-18:08:27.142 ...]
00:25:41.024 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2892777
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2892777
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2892777 ']'
[... failure repeats at 18:08:27.142 ...]
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:41.025 [2024-07-24 18:08:27.142649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:41.025 [2024-07-24 18:08:27.142677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:41.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:41.025 [2024-07-24 18:08:27.142814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.142843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:41.025 [2024-07-24 18:08:27.142988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.143014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:41.025 [2024-07-24 18:08:27.143200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.143229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.143366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.143396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.143559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.143588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.143738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.143764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
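
The xtrace lines above are nvmf/common.sh restarting the target for the next disconnect case: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace as PID 2892777, and waitforlisten polls until the RPC socket /var/tmp/spdk.sock appears, while the host driver keeps retrying its qpairs in the background. As a rough sketch only, not the autotest source, the launch-and-wait step amounts to something like the following; max_retries=100 and the socket path come from the trace, the binary path is shortened, and the poll interval is an assumption:

  # minimal sketch of the traced launch-and-wait sequence (illustrative only)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                                    # 2892777 in this run
  for _ in $(seq 1 100); do                     # max_retries=100, as traced
      kill -0 "$nvmfpid" 2>/dev/null || break   # stop if the target died
      [ -S /var/tmp/spdk.sock ] && break        # RPC socket is up; done waiting
      sleep 0.1                                 # poll interval: an assumption
  done
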
00:25:41.025 [2024-07-24 18:08:27.143914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.143956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.144128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.144173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.144350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.144378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.144553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.144578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.144725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.144754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.144893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.144923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.145118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.145156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.145309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.145334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.145524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.145553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.145717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.145746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.145943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.145969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.146096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.146127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.146250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.146292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.146444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.146469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.146643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.146684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.146887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.146914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.147078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.147113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.147262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.147292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.147487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.147512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.147635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.147661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.147811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.147854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.148023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.148049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.148175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.148201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.148335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.148361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.025 qpair failed and we were unable to recover it.
00:25:41.025 [2024-07-24 18:08:27.148481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.025 [2024-07-24 18:08:27.148507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.148654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.148681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.148801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.148827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.149038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.149066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.149244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.149271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.149426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.149455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.149623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.149651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.149821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.149846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.149962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.150004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.150165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.150194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.150337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.150365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.150536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.150562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.150728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.150757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.150904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.150929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.151074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.151099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.151277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.151303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.151428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.151453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.151604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.151633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.151794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.151826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.152001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.152027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.152189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.152218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.152388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.152417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.152607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.152635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.152802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.152828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.152982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.153008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.153147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.153173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.153326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.153371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.153548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.153573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.153700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.153726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.153912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.153939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.154060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.154086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.154257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.154283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.154437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.154478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.154626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.154655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.154849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.154878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.155030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.155056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.155175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.155201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.155377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.155406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.026 qpair failed and we were unable to recover it.
00:25:41.026 [2024-07-24 18:08:27.155548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.026 [2024-07-24 18:08:27.155578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.155752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.155777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.155906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.155951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.156083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.156117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.156271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.156298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.156450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.156475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.156652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.156678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.156840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.156870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.156998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.157024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.157172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.157198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.157345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.157370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.157571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.157599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.157726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.157753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.157925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.157951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.158079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.158127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.158265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.158293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.158468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.158494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.158642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.158668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.158852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.158878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.159022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.159047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.159199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.159225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.159368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.159409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.159572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.159599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.159782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.159824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.159979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.160020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.160253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.160281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.160453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.160495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.160642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.160684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.160834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.160876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.161009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.161036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.161183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.161209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.027 qpair failed and we were unable to recover it.
00:25:41.027 [2024-07-24 18:08:27.161342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.027 [2024-07-24 18:08:27.161368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.161520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.161546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.161675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.161701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.161830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.161862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.162015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.162041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.162167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.162194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.162325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.162351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.162486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.162513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.162649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.162676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.162826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.162852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.162982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.163010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.163168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.163195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.163323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.163349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.163498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.163525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.163677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.163704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.163857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.163883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.164020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.164048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.164211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.164238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.164367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.164393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.164512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.164538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.164689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.164715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.164837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.164862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.165019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.165046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.165192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.165218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.165374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.165399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.165546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.165573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.165748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.165773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.165905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.165930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.166083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.166115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.166239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.166266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.166433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.166473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.166608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.166636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.166783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.166809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.166932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.166958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.167077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.167109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.167241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.167267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.167415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.167440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.028 [2024-07-24 18:08:27.167590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.028 [2024-07-24 18:08:27.167616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.028 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.167749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.167777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.167901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.167926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.168049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.168074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.168237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.168265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.168420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.168446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.168629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.168655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.168790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.168817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.168993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.169018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.169162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.169189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.169342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.169368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.169512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.169538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.169693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.169718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.169867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.169893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.170065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.170091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.170233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.170261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.170405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.170430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.170579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.170605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.170752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.170777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.170950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.170976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.171154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.171181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.171301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.171329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.171487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.171514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.171666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.171692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.171839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.171865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.172025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.172051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.172206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.172232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.172389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.172415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.172566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.172592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.172739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.029 [2024-07-24 18:08:27.172764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.029 qpair failed and we were unable to recover it.
00:25:41.029 [2024-07-24 18:08:27.172917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.172943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.173096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.173127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.173277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.173302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.173483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.173514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.173692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.173718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.173864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.173890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.174021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.174047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.174175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.174200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.174358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.174384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.029 [2024-07-24 18:08:27.174558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.174583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 
00:25:41.029 [2024-07-24 18:08:27.174710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.029 [2024-07-24 18:08:27.174736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.029 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.174869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.174896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.175079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.175109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.175238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.175264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.175396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.175424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.175598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.175624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.175804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.175830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.175958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.175984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.176121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.176147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.176299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.176325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 
00:25:41.030 [2024-07-24 18:08:27.176453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.176478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.176629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.176655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.176832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.176859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.177039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.177064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.177196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.177222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.177347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.177374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.177522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.177547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.177669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.177695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.177846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.177872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.178001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.178027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 
00:25:41.030 [2024-07-24 18:08:27.178181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.178207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.178380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.178423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.178554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.178580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.178712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.178738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.178885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.178912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.179064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.179089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.179252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.179279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.179402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.179428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.179549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.179574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.179706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.179732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 
00:25:41.030 [2024-07-24 18:08:27.179883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.179909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.180064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.180089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.180257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.180284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.180410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.180436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.180589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.180615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.180763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.180789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.180941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.180968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.181122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.181149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.181271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.181297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.181426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.181452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 
00:25:41.030 [2024-07-24 18:08:27.181583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.181608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.181770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.030 [2024-07-24 18:08:27.181796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.030 qpair failed and we were unable to recover it. 00:25:41.030 [2024-07-24 18:08:27.181927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.181953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.182112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.182138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.182265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.182291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.182439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.182464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.182586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.182611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.182740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.182770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.182917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.182943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.183113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.183154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 
00:25:41.031 [2024-07-24 18:08:27.183313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.183340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.183465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.183490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.183609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.183635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.183790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.183816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.183973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.183998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.184178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.184204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.184355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.184381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.184532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.184559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.184678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.184703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.184876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.184902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 
00:25:41.031 [2024-07-24 18:08:27.185037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.185062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.185231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.185258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.185490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.185516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.185666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.185694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.185816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.185842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.185970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.185997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.186153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.186181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.186357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.186384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.186498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.186524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.186680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.186706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 
00:25:41.031 [2024-07-24 18:08:27.186861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.186887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.187117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.187144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.187296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.187322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.187460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.187486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.187614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.187640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.187772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.187800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.187955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.187981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.188146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.188173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.188304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.188330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 00:25:41.031 [2024-07-24 18:08:27.188487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.031 [2024-07-24 18:08:27.188513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.031 qpair failed and we were unable to recover it. 
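Errno 111 in the records above is Linux ECONNREFUSED: each connect() reaches the kernel on 10.0.0.2, but nothing is listening yet on TCP port 4420 (the NVMe-oF default port), so the attempt is refused outright rather than timing out. A minimal sketch, assuming a Linux host, that confirms the numeric value:

    /* Minimal check: on Linux, ECONNREFUSED is errno 111 - the value
     * every posix_sock_create() failure above reports. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        printf("ECONNREFUSED = %d (%s)\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;   /* prints: ECONNREFUSED = 111 (Connection refused) */
    }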
00:25:41.031 [2024-07-24 18:08:27.188666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.031 [2024-07-24 18:08:27.188692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.031 qpair failed and we were unable to recover it.
[... further identical failures for tqpair=0x7f6014000b90, interleaved with the target's startup banner below ...]
00:25:41.032 [2024-07-24 18:08:27.189955] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:25:41.032 [2024-07-24 18:08:27.190015] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:41.032 [2024-07-24 18:08:27.190089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.032 [2024-07-24 18:08:27.190122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.032 qpair failed and we were unable to recover it.
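The EAL line records the nvmf target starting with core mask -c 0xF0, i.e. bits 4-7 set, so DPDK pins its lcores to CPUs 4 through 7. A minimal sketch that decodes a mask of this form (the value is taken from the EAL line above; the decoding is generic bit-twiddling, not DPDK code):

    /* Decode a DPDK-style hex core mask into the CPU list it selects. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;     /* from "-c 0xF0" in the EAL line */
        printf("core mask 0x%lX selects CPUs:", mask);
        for (int cpu = 0; cpu < 64; cpu++)        /* assumes 64-bit long */
            if (mask & (1UL << cpu))
                printf(" %d", cpu);
        printf("\n");                  /* prints: selects CPUs: 4 5 6 7 */
        return 0;
    }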
00:25:41.032 [2024-07-24 18:08:27.190250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.032 [2024-07-24 18:08:27.190276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.032 qpair failed and we were unable to recover it.
[... the same record pair continues for tqpair=0x7f6014000b90 through 18:08:27.206032; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:25:41.034 [2024-07-24 18:08:27.206161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.206188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.206318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.206345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.206498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.206525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.206647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.206673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.206824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.206850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.206968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.206994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.207145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.207171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.207300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.207326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.207492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.207518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.207664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.207690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 
00:25:41.034 [2024-07-24 18:08:27.207847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.207874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.207997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.208024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.208168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.208196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.208322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.208348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.208506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.208533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.208654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.208680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.208909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.208936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.209113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.209141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.209270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.209296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.209420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.209446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 
00:25:41.034 [2024-07-24 18:08:27.209599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.209626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.209804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.209831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.209981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.210007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.210145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.210173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.210325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.210351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.210495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.034 [2024-07-24 18:08:27.210522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.034 qpair failed and we were unable to recover it. 00:25:41.034 [2024-07-24 18:08:27.210669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.210695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.210845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.210876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.210993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.211019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.211154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.211182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 
00:25:41.035 [2024-07-24 18:08:27.211306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.211332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.211489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.211515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.211665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.211692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.211867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.211893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.212075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.212106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.212228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.212254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.212402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.212428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.212657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.212682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.212836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.212863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.212990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.213016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 
00:25:41.035 [2024-07-24 18:08:27.213192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.213218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.213352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.213379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.213509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.213535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.213657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.213683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.213840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.213866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.214025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.214051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.214198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.214225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.214398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.214424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.214554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.214579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.214730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.214756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 
00:25:41.035 [2024-07-24 18:08:27.214904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.214931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.215112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.215139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.215288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.215314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.215477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.215503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.215661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.215688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.215839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.215865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.216014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.216040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.216185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.216212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.216367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.216394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.216551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.216577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 
00:25:41.035 [2024-07-24 18:08:27.216710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.216736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.216865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.216892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.217022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.217048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.217230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.217257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.217386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.217413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.217562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.217589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.217747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.217773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.217912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.217942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.218063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.218089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 00:25:41.035 [2024-07-24 18:08:27.218229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-07-24 18:08:27.218255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.035 qpair failed and we were unable to recover it. 
00:25:41.035 [2024-07-24 18:08:27.218380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.218406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.218527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.218553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.218719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.218746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.218888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.218915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.219088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.219122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.219265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.219291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.219447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.219474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.219593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.219619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.219744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.219770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.219921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.219947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 
00:25:41.036 [2024-07-24 18:08:27.220078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.220109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.220267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.220293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.220450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.220476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.220603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.220630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.220768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.220795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.220946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.220972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.221100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.221133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.221289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.221315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.221491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.221517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.221682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.221708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 
00:25:41.036 [2024-07-24 18:08:27.221864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.221891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.222023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.222048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.222199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.222226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.222348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.222375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.222527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.222553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.222704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.222730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.222881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.222908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.223065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.223091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.223221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.223248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.223387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.223413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 
00:25:41.036 [2024-07-24 18:08:27.223538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.223564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.223710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.223736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.223916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.223942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.224099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.224142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.224261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.224288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.036 qpair failed and we were unable to recover it. 00:25:41.036 [2024-07-24 18:08:27.224444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-07-24 18:08:27.224470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.224589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.224615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.224752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.224782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.224933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.224960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.225113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.225140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 
00:25:41.037 [2024-07-24 18:08:27.225308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.225335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.225488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.225515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.225662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.225688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.225821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.225847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.225972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.225998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.226131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.226159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.226313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.226340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.226487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.226513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.226663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.226689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.226819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.226846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 
00:25:41.037 [2024-07-24 18:08:27.227001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.227028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.227186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.227213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.227378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.227404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.037 [2024-07-24 18:08:27.227570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.227596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.227728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.227755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.227929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.227956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.228107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.228134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.228258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.228286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.228417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.228445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.228595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.228622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 
00:25:41.037 [2024-07-24 18:08:27.228775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.228801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.228945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.228971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.229093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.229125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.229257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.037 [2024-07-24 18:08:27.229283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.037 qpair failed and we were unable to recover it. 00:25:41.037 [2024-07-24 18:08:27.229456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.038 [2024-07-24 18:08:27.229482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.038 qpair failed and we were unable to recover it. 00:25:41.038 [2024-07-24 18:08:27.229657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.038 [2024-07-24 18:08:27.229683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.038 qpair failed and we were unable to recover it. 00:25:41.038 [2024-07-24 18:08:27.229836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.038 [2024-07-24 18:08:27.229864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.038 qpair failed and we were unable to recover it. 00:25:41.038 [2024-07-24 18:08:27.230025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.038 [2024-07-24 18:08:27.230051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.038 qpair failed and we were unable to recover it. 00:25:41.038 [2024-07-24 18:08:27.230183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.038 [2024-07-24 18:08:27.230211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.038 qpair failed and we were unable to recover it. 00:25:41.038 [2024-07-24 18:08:27.230360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.038 [2024-07-24 18:08:27.230387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.038 qpair failed and we were unable to recover it. 
[... the identical three-line sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with advancing timestamps from 18:08:27.230538 through 18:08:27.262051 ...]
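For anyone triaging this run: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP SYN to 10.0.0.2:4420 (the NVMe/TCP port this test dials) was answered with a RST, so the address was reachable but nothing was accepting on that port at the time. A minimal standalone sketch, not part of this test, that produces the same errno when run on a Linux host from which 10.0.0.2 is reachable but which has no listener on port 4420:

/* Minimal reproduction (illustrative only): connecting to a reachable
 * host/port with no listener fails with errno 111 (ECONNREFUSED). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);   /* target from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* expected output: connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    close(fd);
    return 0;
}

Had the address itself been unreachable, connect() would instead have failed with ETIMEDOUT or EHOSTUNREACH, so the refused connection narrows the problem to the listener (the NVMe-oF target) not yet being up, or not bound to port 4420, at this point in the run.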
00:25:41.329 [2024-07-24 18:08:27.262187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.329 [2024-07-24 18:08:27.262214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.329 qpair failed and we were unable to recover it. 00:25:41.329 [2024-07-24 18:08:27.262332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.329 [2024-07-24 18:08:27.262387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.329 [2024-07-24 18:08:27.262412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.329 qpair failed and we were unable to recover it. 00:25:41.329 [2024-07-24 18:08:27.262566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.329 [2024-07-24 18:08:27.262594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.329 qpair failed and we were unable to recover it. 00:25:41.329 [2024-07-24 18:08:27.262751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.329 [2024-07-24 18:08:27.262778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.262908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.262935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.263087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.263119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.263275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.263301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.263453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.263479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.263659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.263685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 
00:25:41.330 [2024-07-24 18:08:27.263887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.263914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.264080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.264111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.264270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.264297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.264423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.264449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.264602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.264628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.264757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.264783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.264930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.264956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.265112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.265139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.265254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.265280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.265434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.265461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 
00:25:41.330 [2024-07-24 18:08:27.265639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.265665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.265798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.265824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.265981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.266007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.266166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.266193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.266311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.266338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.266464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.266491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.266642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.266668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.266815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.266841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.266965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.266991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.267118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.267145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 
00:25:41.330 [2024-07-24 18:08:27.267294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.267320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.267475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.267502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.267653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.267680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.267831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.267858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.268034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.268060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.268196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.268223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.268346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.268372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.268498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.268524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.268700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.268726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.268856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.268882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 
00:25:41.330 [2024-07-24 18:08:27.269010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.330 [2024-07-24 18:08:27.269036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.330 qpair failed and we were unable to recover it. 00:25:41.330 [2024-07-24 18:08:27.269167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.269198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.269377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.269403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.269576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.269603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.269726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.269752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.269889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.269916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.270042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.270068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.270208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.270235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.270370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.270397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.270553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.270580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 
00:25:41.331 [2024-07-24 18:08:27.270728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.270754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.270908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.270935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.271091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.271124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.271277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.271304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.271449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.271475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.271628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.271655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.271804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.271829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.271997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.272024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.272211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.272238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.272364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.272390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 
00:25:41.331 [2024-07-24 18:08:27.272555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.272582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.272741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.272767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.272887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.272914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.273063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.273089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.273227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.273255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.273415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.273441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.273567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.273594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.273716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.273742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.273922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.273948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.274069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.274095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 
00:25:41.331 [2024-07-24 18:08:27.274222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.274248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.274405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.274431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.274608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.274634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.274803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.274829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.274974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.275000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.275177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.275203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.275324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.275350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.275472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.331 [2024-07-24 18:08:27.275499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.331 qpair failed and we were unable to recover it. 00:25:41.331 [2024-07-24 18:08:27.275677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.275703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.275877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.275903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 
00:25:41.332 [2024-07-24 18:08:27.276082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.276115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.276269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.276300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.276417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.276443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.276569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.276596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.276732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.276758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.276881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.276909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.277086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.277119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.277272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.277300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.277452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.277478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.277633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.277660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 
00:25:41.332 [2024-07-24 18:08:27.277818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.277846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.277967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.277993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.278141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.278168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.278314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.278341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.278469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.278496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.278653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.278679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.278860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.278886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.279070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.279096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.279265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.279291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.279420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.279447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 
00:25:41.332 [2024-07-24 18:08:27.279592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.279618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.279747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.279773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.279923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.279949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.280110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.280137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.280264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.280290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.280445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.280471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.280629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.280655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.280799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.280825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.280981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.281007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.281181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.281207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 
00:25:41.332 [2024-07-24 18:08:27.281372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.281399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.281528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.281555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.281730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.281756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.281933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.281959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.282115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.282142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.332 [2024-07-24 18:08:27.282301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.332 [2024-07-24 18:08:27.282327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.332 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.282453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.282480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.282637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.282663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.282815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.282841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.282991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.283018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 
00:25:41.333 [2024-07-24 18:08:27.283146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.283173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.283304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.283334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.283487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.283513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.283659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.283684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.283826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.283852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.283999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.284026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.284152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.284179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.284339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.284365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.284517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.284543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.284723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.284749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 
00:25:41.333 [2024-07-24 18:08:27.284877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.284904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.285082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.285145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.285295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.285321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.285496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.285522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.285672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.285699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.285854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.285880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.285999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.286025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.286156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.286183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.286332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.286358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.286502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.286528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 
00:25:41.333 [2024-07-24 18:08:27.286652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.286678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.286857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.286883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.287031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.287057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.287209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.287235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.287411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.287437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.287593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.287620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.287769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.287797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.287947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.287974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.288131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.288158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 00:25:41.333 [2024-07-24 18:08:27.288312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.333 [2024-07-24 18:08:27.288339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.333 qpair failed and we were unable to recover it. 
00:25:41.333 [2024-07-24 18:08:27.288461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.333 [2024-07-24 18:08:27.288489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.333 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats for every reconnect attempt from 18:08:27.288 through 18:08:27.324, always for tqpair=0x7f6014000b90 at 10.0.0.2:4420 with errno = 111 ...]
00:25:41.339 [2024-07-24 18:08:27.324930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.339 [2024-07-24 18:08:27.324957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.339 qpair failed and we were unable to recover it.
00:25:41.339 [2024-07-24 18:08:27.325134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.339 [2024-07-24 18:08:27.325161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.339 qpair failed and we were unable to recover it. 00:25:41.339 [2024-07-24 18:08:27.325286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.339 [2024-07-24 18:08:27.325312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.339 qpair failed and we were unable to recover it. 00:25:41.339 [2024-07-24 18:08:27.325441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.339 [2024-07-24 18:08:27.325468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.339 qpair failed and we were unable to recover it. 00:25:41.339 [2024-07-24 18:08:27.325620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.339 [2024-07-24 18:08:27.325647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.339 qpair failed and we were unable to recover it. 00:25:41.339 [2024-07-24 18:08:27.325825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.339 [2024-07-24 18:08:27.325852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.339 qpair failed and we were unable to recover it. 00:25:41.339 [2024-07-24 18:08:27.326002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.339 [2024-07-24 18:08:27.326029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.339 qpair failed and we were unable to recover it. 00:25:41.339 [2024-07-24 18:08:27.326151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.339 [2024-07-24 18:08:27.326178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.339 qpair failed and we were unable to recover it. 00:25:41.339 [2024-07-24 18:08:27.326309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.326334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.326458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.326485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.326611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.326638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 
00:25:41.340 [2024-07-24 18:08:27.326760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.326786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.326938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.326965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.327095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.327126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.327280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.327306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.327457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.327483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.327607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.327633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.327777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.327803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.327954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.327980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.328137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.328165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.328319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.328346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 
00:25:41.340 [2024-07-24 18:08:27.328496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.328523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.328646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.328672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.328799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.328826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.328945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.328972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.329197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.329224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.329376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.329403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.329581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.329607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.329761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.329789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.329936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.329967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.330125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.330153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 
00:25:41.340 [2024-07-24 18:08:27.330279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.330306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.330459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.330486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.330635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.330663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.330841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.330868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.331045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.331071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.331253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.331280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.331402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.331428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.331553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.331580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.331754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.331781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.331904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.331931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 
00:25:41.340 [2024-07-24 18:08:27.332089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.332121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.332256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.332285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.332406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.332433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.332590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.332617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.340 [2024-07-24 18:08:27.332770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.340 [2024-07-24 18:08:27.332796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.340 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.332957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.332984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.333135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.333163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.333291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.333317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.333472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.333499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.333657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.333684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 
00:25:41.341 [2024-07-24 18:08:27.333842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.333868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.333991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.334018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.334168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.334195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.334332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.334359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.334541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.334567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.334708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.334735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.334881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.334908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.335040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.335068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.335228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.335255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.335411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.335437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 
00:25:41.341 [2024-07-24 18:08:27.335591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.335619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.335769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.335797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.335944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.335971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.336120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.336147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.336269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.336296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.336451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.336477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.336632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.336659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.336788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.336815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.336972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.337002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.337181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.337209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 
00:25:41.341 [2024-07-24 18:08:27.337331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.337358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.337502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.337529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.337684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.337711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.337833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.337859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.337992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.338019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.338164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.338192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.338315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.338342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.341 [2024-07-24 18:08:27.338520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.341 [2024-07-24 18:08:27.338546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.341 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.338663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.338690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.338808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.338835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 
00:25:41.342 [2024-07-24 18:08:27.338980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.339006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.339141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.339169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.339326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.339353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.339479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.339505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.339628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.339655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.339815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.339842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.339965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.339993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.340121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.340148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.340321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.340347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.340500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.340527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 
00:25:41.342 [2024-07-24 18:08:27.340654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.340680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.340833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.340859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.340977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.341006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.341159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.341186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.341354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.341381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.341540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.341569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.341723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.341749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.341904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.341930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.342084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.342137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.342317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.342344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 
00:25:41.342 [2024-07-24 18:08:27.342477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.342504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.342664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.342692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.342848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.342875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.343051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.343078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.343234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.343261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.343413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.343440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.343590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.343616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.343732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.343759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.343903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.343934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.344095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.344127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 
00:25:41.342 [2024-07-24 18:08:27.344294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.344321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.344439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.344465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.344592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.344620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.344755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.344782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.344934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.342 [2024-07-24 18:08:27.344960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.342 qpair failed and we were unable to recover it. 00:25:41.342 [2024-07-24 18:08:27.345116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.345147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.345310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.345337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.345490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.345517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.345667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.345694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.345820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.345846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 
00:25:41.343 [2024-07-24 18:08:27.345965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.345992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.346145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.346173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.346330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.346358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.346485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.346512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.346640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.346667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.346796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.346822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.346964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.346991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.347140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.347167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.347294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.347320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.347469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.347497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 
00:25:41.343 [2024-07-24 18:08:27.347631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.347659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.347814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.347841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.347970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.347997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.348148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.348175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.348329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.348356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.348487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.348514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.349315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.349347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.349489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.349517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.350037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.350067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 00:25:41.343 [2024-07-24 18:08:27.350221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.343 [2024-07-24 18:08:27.350250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.343 qpair failed and we were unable to recover it. 
00:25:41.348 [2024-07-24 18:08:27.381154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.348 [2024-07-24 18:08:27.381181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.348 qpair failed and we were unable to recover it.
00:25:41.348 [2024-07-24 18:08:27.381311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.348 [2024-07-24 18:08:27.381339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.348 qpair failed and we were unable to recover it.
00:25:41.348 [2024-07-24 18:08:27.381485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.348 [2024-07-24 18:08:27.381511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.348 qpair failed and we were unable to recover it.
00:25:41.348 [2024-07-24 18:08:27.381641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.348 [2024-07-24 18:08:27.381667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.348 qpair failed and we were unable to recover it.
00:25:41.348 [2024-07-24 18:08:27.381793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.348 [2024-07-24 18:08:27.381819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.348 qpair failed and we were unable to recover it.
00:25:41.348 [2024-07-24 18:08:27.381944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.348 [2024-07-24 18:08:27.381970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.348 qpair failed and we were unable to recover it.
00:25:41.348 [2024-07-24 18:08:27.382125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.348 [2024-07-24 18:08:27.382145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:41.348 [2024-07-24 18:08:27.382153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.348 qpair failed and we were unable to recover it.
00:25:41.348 [2024-07-24 18:08:27.382175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:41.348 [2024-07-24 18:08:27.382191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:41.348 [2024-07-24 18:08:27.382204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:41.348 [2024-07-24 18:08:27.382214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:41.348 [2024-07-24 18:08:27.382308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.349 [2024-07-24 18:08:27.382334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.349 qpair failed and we were unable to recover it.
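The app.c NOTICE lines interleaved above come from the nvmf target application starting up on the same console: tracing was enabled with tracepoint group mask 0xFFFF, and the log itself names the capture options, either running 'spdk_trace -s nvmf -i 0' while the app is up (plain 'spdk_trace' also works when it is the only SPDK application running) or copying /dev/shm/nvmf_trace.0 for offline analysis.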
00:25:41.349 [2024-07-24 18:08:27.382300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:41.349 [2024-07-24 18:08:27.382330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:41.349 [2024-07-24 18:08:27.382375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:41.349 [2024-07-24 18:08:27.382378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:41.349 [2024-07-24 18:08:27.382467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.382497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.382655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.382682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.382818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.382845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.382997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.383024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.383163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.383190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.383319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.383346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.383513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.383540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.383670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.383698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.383840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.383877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 
00:25:41.349 [2024-07-24 18:08:27.384032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.384058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.384196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.384224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.384377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.384414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.384553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.384580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.384733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.384759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.384896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.384923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.385048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.385075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.385214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.385241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.385368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.385395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.385546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.385572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 
00:25:41.349 [2024-07-24 18:08:27.385706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.385733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.385932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.385959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.386146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.386173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.386304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.386331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.386461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.386496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.386622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.386649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.386784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.386819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.386978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.387005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.387126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.387161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 00:25:41.349 [2024-07-24 18:08:27.387286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.349 [2024-07-24 18:08:27.387314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.349 qpair failed and we were unable to recover it. 
00:25:41.349 [2024-07-24 18:08:27.387473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.387499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.387623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.387662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.387796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.387822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.387975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.388001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.388160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.388187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.388326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.388353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.388558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.388585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.388708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.388735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.388868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.388894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.389020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.389046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 
00:25:41.350 [2024-07-24 18:08:27.389220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.389247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.389387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.389413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.389562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.389588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.389712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.389740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.389876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.389902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.390030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.390057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.390197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.390224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.390351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.390377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.390545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.390572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.390726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.390753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 
00:25:41.350 [2024-07-24 18:08:27.390879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.390905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.391028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.391055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.391185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.391213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.391346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.391373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.391509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.391535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.391683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.391710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.391855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.391882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.392037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.392073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.392218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.392245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.392374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.392402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 
00:25:41.350 [2024-07-24 18:08:27.392529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.392556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.392708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.392734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.392864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.392890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.393005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.393030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.350 qpair failed and we were unable to recover it. 00:25:41.350 [2024-07-24 18:08:27.393189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.350 [2024-07-24 18:08:27.393216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.393350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.393376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.393510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.393536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.393690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.393716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.393850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.393880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.394012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.394038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 
00:25:41.351 [2024-07-24 18:08:27.394176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.394204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.394325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.394351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.394486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.394518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.394670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.394697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.394821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.394846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.394972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.395000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.395161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.395188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.395312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.395339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.395500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.395526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.395643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.395670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 
00:25:41.351 [2024-07-24 18:08:27.395908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.395935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.396110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.396138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.396264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.396291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.396489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.396516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.396674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.396700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.396843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.396870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.396999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.397025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.397199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.397226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.397358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.397385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.397509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.397535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 
00:25:41.351 [2024-07-24 18:08:27.397694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.397721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.397847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.397874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.398023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.398051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.398185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.398213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.398364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.398390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.398555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.351 [2024-07-24 18:08:27.398581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.351 qpair failed and we were unable to recover it. 00:25:41.351 [2024-07-24 18:08:27.398728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.398754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.398959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.398985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.399131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.399158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.399290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.399317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 
00:25:41.352 [2024-07-24 18:08:27.399439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.399467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.399613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.399640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.399773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.399799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.399931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.399957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.400078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.400109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.400261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.400288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.400417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.400444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.400607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.400633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.400751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.400781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.400934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.400961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 
00:25:41.352 [2024-07-24 18:08:27.401120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.401147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.401270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.401296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.401418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.401444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.401565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.401591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.401741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.401768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.401901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.401928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.402059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.402085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.402222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.402248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.402379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.402406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.402531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.402558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 
00:25:41.352 [2024-07-24 18:08:27.402718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.402744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.402896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.402925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.403062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.403094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.403250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.403276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.403409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.403441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.403564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.403591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.403711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.403737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.403865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.403892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.404019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.404045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.404184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.404212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 
00:25:41.352 [2024-07-24 18:08:27.404364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.404390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.404511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.404537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.404669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.404695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.404820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.352 [2024-07-24 18:08:27.404846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.352 qpair failed and we were unable to recover it. 00:25:41.352 [2024-07-24 18:08:27.404972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.404998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.405148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.405175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.405308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.405334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.405492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.405528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.405664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.405690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.405813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.405840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 
00:25:41.353 [2024-07-24 18:08:27.405958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.405984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.406112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.406150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.406308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.406334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.406456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.406483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.406616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.406643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.406776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.406802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.406933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.406959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.407077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.407108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.407258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.407288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.407414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.407441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 
00:25:41.353 [2024-07-24 18:08:27.407573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.407600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.407747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.407773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.407904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.407930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.408055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.408088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.408223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.408249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.408378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.408404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.408535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.408561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.408688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.408714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.408863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.408890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.409006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.409032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 
00:25:41.353 [2024-07-24 18:08:27.409179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.409206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.409327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.409353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.409518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.409548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.409676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.409702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.409829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.409855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.353 qpair failed and we were unable to recover it. 00:25:41.353 [2024-07-24 18:08:27.410051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.353 [2024-07-24 18:08:27.410076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.354 qpair failed and we were unable to recover it. 00:25:41.354 [2024-07-24 18:08:27.410251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.354 [2024-07-24 18:08:27.410295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.354 qpair failed and we were unable to recover it. 00:25:41.354 [2024-07-24 18:08:27.410430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.354 [2024-07-24 18:08:27.410457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.354 qpair failed and we were unable to recover it. 00:25:41.354 [2024-07-24 18:08:27.410610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.354 [2024-07-24 18:08:27.410636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.354 qpair failed and we were unable to recover it. 00:25:41.354 [2024-07-24 18:08:27.410781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.354 [2024-07-24 18:08:27.410807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.354 qpair failed and we were unable to recover it. 
[2024-07-24 18:08:27.410430 - 18:08:27.442574: the identical posix.c:1023 connect() failed (errno = 111) / nvme_tcp.c:2383 sock connection error messages for tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 repeat continuously; every attempt ends "qpair failed and we were unable to recover it."]
00:25:41.360 [2024-07-24 18:08:27.442753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.442779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.442917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.442943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.443078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.443122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.443253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.443279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.443480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.443506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.443635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.443662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.443814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.443841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.443982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.444008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.444132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.444158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.444292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.444318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 
00:25:41.360 [2024-07-24 18:08:27.444470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.444500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.444619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.444645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.444803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.444830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.445007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.445033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.445202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.445229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.445379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.445421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.445548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.445574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.445696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.445722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.445905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.445932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.446047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.446072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 
00:25:41.360 [2024-07-24 18:08:27.446209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.446235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.446388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.446421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.446551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.446577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.446708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.446733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.446870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.446897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.447060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.447087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.447225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.447252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.447372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.360 [2024-07-24 18:08:27.447398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.360 qpair failed and we were unable to recover it. 00:25:41.360 [2024-07-24 18:08:27.447535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.447562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.447699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.447726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 
00:25:41.361 [2024-07-24 18:08:27.447849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.447876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.447995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.448021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.448156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.448183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.448310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.448336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.448470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.448496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.448620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.448646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.448781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.448807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.448958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.448985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.449138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.449164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.449286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.449312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 
00:25:41.361 [2024-07-24 18:08:27.449479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.449505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.449651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.449677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.449830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.449856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.449985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.450011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.450171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.450199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.450318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.450345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.450481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.450507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.450633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.450659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.450783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.450809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.450957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.450983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 
00:25:41.361 [2024-07-24 18:08:27.451149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.451177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.451303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.451335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.451483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.451509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.451627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.451653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.451778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.451803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.451927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.451960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.452096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.452129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.452282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.452308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.452437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.452463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.452588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.452614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 
00:25:41.361 [2024-07-24 18:08:27.452763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.452789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.452912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.452938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.453061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.453086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.453254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.453280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.453398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.453424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.361 [2024-07-24 18:08:27.453556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.361 [2024-07-24 18:08:27.453582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.361 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.453743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.453769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.453899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.453926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.454046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.454072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.454220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.454247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 
00:25:41.362 [2024-07-24 18:08:27.454409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.454435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.454570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.454597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.454748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.454774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.454924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.454950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.455085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.455118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.455245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.455272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.455395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.455421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.455578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.455604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.455738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.455765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.455909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.455935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 
00:25:41.362 [2024-07-24 18:08:27.456050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.456076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.456203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.456230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.456384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.456410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.456543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.456569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.456701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.456726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.456921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.456947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.457068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.457094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.457249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.457275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.457425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.457457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.457613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.457638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 
00:25:41.362 [2024-07-24 18:08:27.457790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.457817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.457949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.457975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.458154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.458181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.458313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.458340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.458483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.458509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.458641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.458672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.458821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.458847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.458969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.458996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.459126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.459153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.459307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.459334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 
00:25:41.362 [2024-07-24 18:08:27.459487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.459513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.459639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.459665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.459819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.459846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.362 qpair failed and we were unable to recover it. 00:25:41.362 [2024-07-24 18:08:27.459975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.362 [2024-07-24 18:08:27.460002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.460164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.460191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.460314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.460340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.460471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.460497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.460616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.460642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.460764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.460790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.460928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.460955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 
00:25:41.363 [2024-07-24 18:08:27.461077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.461126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.461249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.461275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.461426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.461452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.461583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.461609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.461755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.461781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.461938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.461964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.462121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.462148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.462272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.462298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.462421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.462447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.462566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.462591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 
00:25:41.363 [2024-07-24 18:08:27.462751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.462777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.462927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.462954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.463095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.463130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.463264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.463290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.463406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.463433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.463592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.463619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.463744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.463771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.463892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.463918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.464070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.464096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 00:25:41.363 [2024-07-24 18:08:27.464263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.363 [2024-07-24 18:08:27.464289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.363 qpair failed and we were unable to recover it. 
00:25:41.363 [2024-07-24 18:08:27.464411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.363 [2024-07-24 18:08:27.464442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.363 qpair failed and we were unable to recover it.
00:25:41.369 [... same posix_sock_create / nvme_tcp_qpair_connect_sock error triplet repeats for each reconnect attempt from 18:08:27.464411 through 18:08:27.498976; every attempt fails with errno = 111 against tqpair=0x7f601c000b90, addr=10.0.0.2, port=4420 ...]
00:25:41.369 [2024-07-24 18:08:27.499133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.499161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.499284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.499309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.499462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.499488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.499609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.499634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.499810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.499836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.499991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.500016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.500172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.500198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.500324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.500350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.500484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.500510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.500663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.500689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 
00:25:41.369 [2024-07-24 18:08:27.500814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.500841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.501018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.501045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.501184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.501210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.501363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.501389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.501540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.501566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.501682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.501708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.501833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.501860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.502000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.502026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.502154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.502182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.502323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.502349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 
00:25:41.369 [2024-07-24 18:08:27.502464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.502490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.502636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.502662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.502836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.502878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.503018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.503046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.503182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.503211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.503342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.503369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.503524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.503551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.503703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.503729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.503851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.503878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.504001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.504027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 
00:25:41.369 [2024-07-24 18:08:27.504190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.504217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.504347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.504373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.504497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.504523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.369 [2024-07-24 18:08:27.504688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.369 [2024-07-24 18:08:27.504714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.369 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.504870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.504898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.505018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.505048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.505183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.505209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.505327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.505353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.505490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.505517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.505639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.505665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 
00:25:41.370 [2024-07-24 18:08:27.505802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.505828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.505975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.506000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.506126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.506162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.506311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.506338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.506466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.506492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.506653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.506679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.506805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.506831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.506994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.507019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.507148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.507176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.507312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.507338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 
00:25:41.370 [2024-07-24 18:08:27.507494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.507519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.507645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.507671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.507793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.507819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.507936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.507962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.508114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.508141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.508273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.508298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.508444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.508470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.508603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.508630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.508780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.508806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.508930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.508956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 
00:25:41.370 [2024-07-24 18:08:27.509079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.509111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.509244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.509270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.509427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.509468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.509617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.509644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.509784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.509811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.509934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.509960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.510078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.510110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.510240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.510268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.510392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.510427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.510582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.510608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 
00:25:41.370 [2024-07-24 18:08:27.510755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.510781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.510930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.510957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.511092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.511126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.511270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.511296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.370 [2024-07-24 18:08:27.511424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.370 [2024-07-24 18:08:27.511450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.370 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.511575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.511607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.511770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.511796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.511966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.511992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.512144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.512171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.512303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.512329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 
00:25:41.371 [2024-07-24 18:08:27.512485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.512511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.512654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.512683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.512817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.512844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.512993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.513019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.513154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.513181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.513313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.513340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.513502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.513528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.513651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.513678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.513794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.513820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.513953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.513980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 
00:25:41.371 [2024-07-24 18:08:27.514122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.514153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.514275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.514301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.514457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.514483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.514618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.514644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.514769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.514795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.514950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.514976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.515099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.515131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.515252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.515278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.515396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.515423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.515595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.515621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 
00:25:41.371 [2024-07-24 18:08:27.515737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.515763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.515892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.515919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.516065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.516091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.516242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.516268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.516391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.516418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.516566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.516592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.516715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.516741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.516866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.516892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.517047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.517073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 00:25:41.371 [2024-07-24 18:08:27.517211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.371 [2024-07-24 18:08:27.517237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.371 qpair failed and we were unable to recover it. 
00:25:41.372 [2024-07-24 18:08:27.517378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.517404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.517530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.517558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.517713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.517739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.517894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.517920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.518039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.518065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.518195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.518226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.518367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.518393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.518515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.518541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.518692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.518718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.518830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.518856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 
00:25:41.372 [2024-07-24 18:08:27.518980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.519007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.519134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.519162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.519292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.519319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.519449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.519475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.519518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7a230 (9): Bad file descriptor 00:25:41.372 [2024-07-24 18:08:27.519728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.519777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.519969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.520007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.520163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.520196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.520348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.520376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 00:25:41.372 [2024-07-24 18:08:27.520518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.520552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it. 
00:25:41.372 [2024-07-24 18:08:27.520705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.372 [2024-07-24 18:08:27.520732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.372 qpair failed and we were unable to recover it.
00:25:41.372 [... the same connect() failed, errno = 111 / sock connection error record pair repeats continuously from 18:08:27.520 through 18:08:27.527 for tqpair=0x7f6014000b90 and tqpair=0x7f601c000b90, every attempt against addr=10.0.0.2, port=4420 and every attempt ending "qpair failed and we were unable to recover it." ...]
00:25:41.373 [... errno = 111 connect retries against tqpair=0x7f6014000b90 continue, interleaved with the shell trace below ...]
00:25:41.373 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:41.373 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:25:41.373 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:41.374 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:41.374 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
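A note on the failure pattern above: errno = 111 is ECONNREFUSED on Linux, i.e. each connect() to 10.0.0.2:4420 was refused because nothing was accepting on the target's listener at that moment, so the host side keeps retrying the qpair connect. A minimal sketch (hypothetical commands, not part of this run) that reproduces the same errno from a shell:

    # ECONNREFUSED is errno 111 on Linux:
    python3 -c 'import errno; print(errno.ECONNREFUSED)'    # prints: 111
    # bash's /dev/tcp pseudo-device surfaces the same failure when no
    # listener is bound to the address/port pair seen in the log:
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
        || echo 'connect failed: Connection refused (errno 111)'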
00:25:41.374 [2024-07-24 18:08:27.528549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.374 [2024-07-24 18:08:27.528577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:25:41.374 qpair failed and we were unable to recover it.
00:25:41.374 [... the same two-record failure repeats from 18:08:27.528 through 18:08:27.549 for tqpair=0x7f6014000b90, tqpair=0x7f601c000b90, and tqpair=0x7f600c000b90, plus one attempt on tqpair=0xa6c250 at 18:08:27.535; every attempt targets addr=10.0.0.2, port=4420 and ends "qpair failed and we were unable to recover it." ...]
00:25:41.377 [... errno = 111 connect retries continue for tqpair=0x7f600c000b90, tqpair=0xa6c250, and tqpair=0x7f601c000b90, interleaved with the shell trace below ...]
00:25:41.378 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:41.378 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:41.378 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.378 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
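The "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" call traced above asks the running nvmf target to create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0, which the test later exports over NVMe/TCP. As a sketch of the equivalent standalone invocation (assuming the target's default RPC socket; rpc_cmd is the harness wrapper around the same script):

    # Create a 64 MB malloc bdev (512 B blocks) named Malloc0 on a live target:
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0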
[... many further identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." retries against addr=10.0.0.2, port=4420 elided; the host cycles through tqpair handles 0xa6c250, 0x7f600c000b90, 0x7f6014000b90, and 0x7f601c000b90 ...]
00:25:41.647 [2024-07-24 18:08:27.579699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.647 [2024-07-24 18:08:27.579730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.647 qpair failed and we were unable to recover it.
00:25:41.648 (last three messages repeated 4 more times for tqpair=0x7f601c000b90, 18:08:27.579884 through 18:08:27.580400)
00:25:41.648 Malloc0
00:25:41.648 (retry triple repeated twice more for tqpair=0x7f601c000b90, 18:08:27.580576 and 18:08:27.580733)
00:25:41.648 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.648 (retry triple repeated once more for tqpair=0x7f601c000b90 at 18:08:27.580884)
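The bare Malloc0 above is RPC stdout: the bdev-create call on the target returning the new bdev's name while the host keeps retrying. The creating command itself falls outside this excerpt; on a stock SPDK tree it would look roughly like the sketch below, where the 64 MiB size and 512-byte block size are assumptions, not values from this log.

  # hypothetical re-creation of the backing bdev (sizes assumed, name from the log)
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512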
00:25:41.648 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:41.648 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.648 [2024-07-24 18:08:27.581143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.648 [2024-07-24 18:08:27.581174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.648 qpair failed and we were unable to recover it.
00:25:41.648 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:41.648 (retry triple repeated 7 more times for tqpair=0x7f601c000b90, 18:08:27.581301 through 18:08:27.582252)
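rpc_cmd in these traces is the autotest wrapper around SPDK's scripts/rpc.py. Outside the harness, the same transport-creation step would look roughly like this sketch (default RPC socket assumed; the -o flag is carried over verbatim from the trace above):

  # create the TCP transport on a running nvmf_tgt
  scripts/rpc.py nvmf_create_transport -t tcp -o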
00:25:41.648 [2024-07-24 18:08:27.582407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.648 [2024-07-24 18:08:27.582434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.648 qpair failed and we were unable to recover it.
00:25:41.648 (last three messages repeated 8 more times for tqpair=0x7f601c000b90, 18:08:27.582590 through 18:08:27.583928)
00:25:41.648 [2024-07-24 18:08:27.584088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.648 [2024-07-24 18:08:27.584124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.648 [2024-07-24 18:08:27.584140] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:41.648 qpair failed and we were unable to recover it.
00:25:41.648 (retry triple repeated 8 more times for tqpair=0x7f601c000b90, 18:08:27.584289 through 18:08:27.585392)
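That *** TCP Transport Init *** notice is the target confirming the nvmf_create_transport RPC took effect. When reproducing this by hand, one way to check the same state (assuming a reasonably current SPDK tree) is:

  # should list a transport with "trtype": "TCP" once init succeeded
  scripts/rpc.py nvmf_get_transports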
00:25:41.649 [2024-07-24 18:08:27.585557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.649 [2024-07-24 18:08:27.585582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.649 qpair failed and we were unable to recover it.
00:25:41.649 (last three messages repeated once more for tqpair=0x7f601c000b90, then 8 times for tqpair=0x7f6014000b90, 18:08:27.585913 through 18:08:27.587127)
00:25:41.649 [2024-07-24 18:08:27.587303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.649 [2024-07-24 18:08:27.587330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.649 qpair failed and we were unable to recover it.
00:25:41.649 (last three messages repeated 3 more times for tqpair=0x7f6014000b90 through 18:08:27.587813, then 6 times for tqpair=0x7f601c000b90, 18:08:27.588079 through 18:08:27.588908)
00:25:41.649 [2024-07-24 18:08:27.589061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.649 [2024-07-24 18:08:27.589089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.649 qpair failed and we were unable to recover it.
00:25:41.649 (last three messages repeated 9 more times for tqpair=0x7f601c000b90, 18:08:27.589237 through 18:08:27.590594)
00:25:41.649 [2024-07-24 18:08:27.590755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.649 [2024-07-24 18:08:27.590781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.649 qpair failed and we were unable to recover it.
00:25:41.650 (last three messages repeated 9 times for tqpair=0x7f600c000b90, 18:08:27.590958 through 18:08:27.592270)
00:25:41.650 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.650 [2024-07-24 18:08:27.592418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.650 [2024-07-24 18:08:27.592444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.650 qpair failed and we were unable to recover it.
00:25:41.650 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:41.650 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.650 (retry triple repeated twice more for tqpair=0x7f600c000b90, 18:08:27.592575 and 18:08:27.592734)
00:25:41.650 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:41.650 (retry triple repeated 5 times for tqpair=0x7f601c000b90, 18:08:27.592903 through 18:08:27.593574)
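Standalone form of the subsystem-creation step traced above, with the NQN and serial taken from the log; in rpc.py, -a allows any host to connect and -s sets the serial number:

  # create the subsystem the host will reconnect to
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001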
00:25:41.650 [2024-07-24 18:08:27.593725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.650 [2024-07-24 18:08:27.593751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.650 qpair failed and we were unable to recover it.
00:25:41.650 (last three messages repeated 9 more times for tqpair=0x7f601c000b90, 18:08:27.593870 through 18:08:27.595343)
00:25:41.650 [2024-07-24 18:08:27.595503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.650 [2024-07-24 18:08:27.595529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.650 qpair failed and we were unable to recover it.
00:25:41.651 (last three messages repeated 2 more times for tqpair=0x7f601c000b90 through 18:08:27.595837, then 7 times for tqpair=0x7f600c000b90, 18:08:27.595990 through 18:08:27.596960)
00:25:41.651 [2024-07-24 18:08:27.597111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.651 [2024-07-24 18:08:27.597138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.651 qpair failed and we were unable to recover it.
00:25:41.651 (last three messages repeated 4 more times for tqpair=0x7f600c000b90 through 18:08:27.597798, once for tqpair=0x7f601c000b90 at 18:08:27.597948, then 4 times for tqpair=0x7f6014000b90, 18:08:27.598117 through 18:08:27.598605)
00:25:41.651 [2024-07-24 18:08:27.598754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.651 [2024-07-24 18:08:27.598780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.651 qpair failed and we were unable to recover it.
00:25:41.651 (last three messages repeated once for tqpair=0x7f600c000b90 at 18:08:27.598922, then 8 times for tqpair=0x7f601c000b90, 18:08:27.599079 through 18:08:27.600187)
00:25:41.651 [2024-07-24 18:08:27.600349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.651 [2024-07-24 18:08:27.600378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.651 qpair failed and we were unable to recover it.
00:25:41.651 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.651 (retry triple repeated once more for tqpair=0x7f6014000b90 at 18:08:27.600507)
00:25:41.651 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:41.651 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.651 (retry triple repeated twice more for tqpair=0x7f6014000b90, 18:08:27.600661 and 18:08:27.600810)
00:25:41.651 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:41.651 (retry triple repeated twice for tqpair=0x7f600c000b90, 18:08:27.600971 and 18:08:27.601143, then twice for tqpair=0x7f601c000b90, 18:08:27.601302 and 18:08:27.601483)
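The namespace step traced above attaches the Malloc0 bdev echoed earlier to the subsystem. A minimal standalone sketch, assuming the bdev already exists on the target:

  # expose Malloc0 as a namespace of cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0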
00:25:41.651 [2024-07-24 18:08:27.601632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.651 [2024-07-24 18:08:27.601658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.651 qpair failed and we were unable to recover it.
00:25:41.652 (last three messages repeated 9 more times for tqpair=0x7f601c000b90, 18:08:27.601894 through 18:08:27.603183)
00:25:41.652 [2024-07-24 18:08:27.603357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.603383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 (last three messages repeated 9 more times for tqpair=0x7f601c000b90, 18:08:27.603506 through 18:08:27.604842)
00:25:41.652 [2024-07-24 18:08:27.604991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.605017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 (last three messages repeated once more for tqpair=0x7f601c000b90, then 8 times for tqpair=0x7f600c000b90, 18:08:27.605406 through 18:08:27.606529)
00:25:41.652 [2024-07-24 18:08:27.606698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.606739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.606875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.606903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.607030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.607055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.607218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.607245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.607382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.607408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.607532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.607557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.607693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.607721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.607849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.607876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.608006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.608034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.652 [2024-07-24 18:08:27.608160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.652 [2024-07-24 18:08:27.608188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.652 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.608341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.608367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.653 [2024-07-24 18:08:27.608513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.608539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:41.653 [2024-07-24 18:08:27.608677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.653 [2024-07-24 18:08:27.608702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:41.653 [2024-07-24 18:08:27.608825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.608851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.609000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.609026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.609146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.609172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.609303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.609329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.609464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.609489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.609619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.609645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.609767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.609798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.609925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.609950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.610065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.610091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.610221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.610248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.610374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.610401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.610553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.610580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.610706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.610733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f600c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.610901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.610940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.611076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.611121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f601c000b90 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.611258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.611296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.611428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.611455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.611602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.611628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.611756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.611782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.611908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.611934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.612058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.612084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.612211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.653 [2024-07-24 18:08:27.612237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6c250 with addr=10.0.0.2, port=4420
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.612581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:41.653 [2024-07-24 18:08:27.614861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.653 [2024-07-24 18:08:27.615010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.653 [2024-07-24 18:08:27.615037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.653 [2024-07-24 18:08:27.615053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.653 [2024-07-24 18:08:27.615067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.653 [2024-07-24 18:08:27.615112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.653 18:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2892369
00:25:41.653 [2024-07-24 18:08:27.624735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.653 [2024-07-24 18:08:27.624864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.653 [2024-07-24 18:08:27.624890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.653 [2024-07-24 18:08:27.624905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.653 [2024-07-24 18:08:27.624919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.653 [2024-07-24 18:08:27.624948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.653 qpair failed and we were unable to recover it.
00:25:41.653 [2024-07-24 18:08:27.634766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.653 [2024-07-24 18:08:27.634903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.653 [2024-07-24 18:08:27.634930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.653 [2024-07-24 18:08:27.634944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.653 [2024-07-24 18:08:27.634958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.634993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.644843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.645004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.645030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.645045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.645058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.645087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.654768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.654914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.654940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.654954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.654968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.654996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.664771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.664899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.664926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.664941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.664954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.664982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.674787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.674925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.674950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.674965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.674978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.675008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.684902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.685034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.685065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.685080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.685094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.685132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.694943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.695078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.695111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.695129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.695143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.695172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.704893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.705018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.705045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.705060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.705074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.705109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.714877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.715006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.715032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.715047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.715060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.715088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.724919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.725054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.725080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.725094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.725114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.725149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.734942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.735070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.735096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.735119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.735133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.735166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.745000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.745136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.745167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.745182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.745195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.745224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.755031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.654 [2024-07-24 18:08:27.755171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.654 [2024-07-24 18:08:27.755197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.654 [2024-07-24 18:08:27.755212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.654 [2024-07-24 18:08:27.755225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.654 [2024-07-24 18:08:27.755254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.654 qpair failed and we were unable to recover it.
00:25:41.654 [2024-07-24 18:08:27.765024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.655 [2024-07-24 18:08:27.765173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.655 [2024-07-24 18:08:27.765198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.655 [2024-07-24 18:08:27.765213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.655 [2024-07-24 18:08:27.765226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.655 [2024-07-24 18:08:27.765254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.655 qpair failed and we were unable to recover it.
00:25:41.655 [2024-07-24 18:08:27.775069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.655 [2024-07-24 18:08:27.775220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.655 [2024-07-24 18:08:27.775251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.655 [2024-07-24 18:08:27.775267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.655 [2024-07-24 18:08:27.775280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.655 [2024-07-24 18:08:27.775308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.655 qpair failed and we were unable to recover it.
00:25:41.655 [2024-07-24 18:08:27.785099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.655 [2024-07-24 18:08:27.785275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.655 [2024-07-24 18:08:27.785300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.655 [2024-07-24 18:08:27.785315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.655 [2024-07-24 18:08:27.785328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.655 [2024-07-24 18:08:27.785357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.655 qpair failed and we were unable to recover it.
00:25:41.655 [2024-07-24 18:08:27.795158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.655 [2024-07-24 18:08:27.795295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.655 [2024-07-24 18:08:27.795321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.655 [2024-07-24 18:08:27.795336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.655 [2024-07-24 18:08:27.795349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.655 [2024-07-24 18:08:27.795378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.655 qpair failed and we were unable to recover it.
00:25:41.655 [2024-07-24 18:08:27.805177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.655 [2024-07-24 18:08:27.805309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.655 [2024-07-24 18:08:27.805335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.655 [2024-07-24 18:08:27.805349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.655 [2024-07-24 18:08:27.805363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.655 [2024-07-24 18:08:27.805391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.655 qpair failed and we were unable to recover it.
00:25:41.655 [2024-07-24 18:08:27.815182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.655 [2024-07-24 18:08:27.815309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.655 [2024-07-24 18:08:27.815334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.655 [2024-07-24 18:08:27.815349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.655 [2024-07-24 18:08:27.815368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.655 [2024-07-24 18:08:27.815397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.655 qpair failed and we were unable to recover it.
00:25:41.655 [2024-07-24 18:08:27.825299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.655 [2024-07-24 18:08:27.825423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.655 [2024-07-24 18:08:27.825449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.655 [2024-07-24 18:08:27.825464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.655 [2024-07-24 18:08:27.825477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.825506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.835282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.835405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.835431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.835445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.835459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.835487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.845260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.845439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.845464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.845479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.845492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.845520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.855311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.855435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.855461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.855475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.855489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.855517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.865358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.865502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.865530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.865550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.865564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.865594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.875383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.875502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.875529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.875543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.875557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.875585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.885364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.885490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.885516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.885530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.885543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.885572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.895414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.895541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.895566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.895581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.895594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.895622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.656 [2024-07-24 18:08:27.905454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.656 [2024-07-24 18:08:27.905638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.656 [2024-07-24 18:08:27.905664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.656 [2024-07-24 18:08:27.905679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.656 [2024-07-24 18:08:27.905697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.656 [2024-07-24 18:08:27.905727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.656 qpair failed and we were unable to recover it.
00:25:41.916 [2024-07-24 18:08:27.915533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.916 [2024-07-24 18:08:27.915663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.916 [2024-07-24 18:08:27.915688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.916 [2024-07-24 18:08:27.915702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.916 [2024-07-24 18:08:27.915716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.916 [2024-07-24 18:08:27.915744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.916 qpair failed and we were unable to recover it.
00:25:41.916 [2024-07-24 18:08:27.925519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.916 [2024-07-24 18:08:27.925680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.916 [2024-07-24 18:08:27.925705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.916 [2024-07-24 18:08:27.925719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.916 [2024-07-24 18:08:27.925732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.916 [2024-07-24 18:08:27.925760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.916 qpair failed and we were unable to recover it.
00:25:41.916 [2024-07-24 18:08:27.935485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.916 [2024-07-24 18:08:27.935614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.916 [2024-07-24 18:08:27.935639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.916 [2024-07-24 18:08:27.935654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.916 [2024-07-24 18:08:27.935667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.916 [2024-07-24 18:08:27.935696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.916 qpair failed and we were unable to recover it.
00:25:41.916 [2024-07-24 18:08:27.945514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.916 [2024-07-24 18:08:27.945634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.916 [2024-07-24 18:08:27.945660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.916 [2024-07-24 18:08:27.945674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.916 [2024-07-24 18:08:27.945688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.916 [2024-07-24 18:08:27.945715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.916 qpair failed and we were unable to recover it.
00:25:41.916 [2024-07-24 18:08:27.955568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.916 [2024-07-24 18:08:27.955706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.916 [2024-07-24 18:08:27.955732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.916 [2024-07-24 18:08:27.955759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.916 [2024-07-24 18:08:27.955774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.916 [2024-07-24 18:08:27.955804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.916 qpair failed and we were unable to recover it.
00:25:41.916 [2024-07-24 18:08:27.965576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.916 [2024-07-24 18:08:27.965707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.916 [2024-07-24 18:08:27.965733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.916 [2024-07-24 18:08:27.965748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.916 [2024-07-24 18:08:27.965761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.916 [2024-07-24 18:08:27.965789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.916 qpair failed and we were unable to recover it.
00:25:41.916 [2024-07-24 18:08:27.975599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.916 [2024-07-24 18:08:27.975721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.916 [2024-07-24 18:08:27.975746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.916 [2024-07-24 18:08:27.975761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.916 [2024-07-24 18:08:27.975774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.916 [2024-07-24 18:08:27.975803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.916 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:27.985604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:27.985721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:27.985747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:27.985761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:27.985774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:27.985802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:27.995740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:27.995861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:27.995886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:27.995901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:27.995920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:27.995949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:28.005725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:28.005864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:28.005890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:28.005905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:28.005918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:28.005946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:28.015705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:28.015825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:28.015851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:28.015866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:28.015879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:28.015910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:28.025766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:28.025897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:28.025922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:28.025937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:28.025951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:28.025979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:28.035778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:28.035924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:28.035950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:28.035965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:28.035981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:28.036010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:28.045817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:28.045944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:28.045970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:28.045985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:28.045998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:28.046026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:28.055824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:28.055943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:28.055968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:28.055983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:28.055996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:28.056024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.917 qpair failed and we were unable to recover it.
00:25:41.917 [2024-07-24 18:08:28.065914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.917 [2024-07-24 18:08:28.066049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.917 [2024-07-24 18:08:28.066077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.917 [2024-07-24 18:08:28.066096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.917 [2024-07-24 18:08:28.066121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.917 [2024-07-24 18:08:28.066153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.075927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.076088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.076120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.076135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.076148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.076177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.085907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.086085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.086117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.086139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.086153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.086182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.095929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.096050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.096076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.096090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.096110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.096140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.105979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.106127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.106153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.106168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.106181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.106210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.115988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.116119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.116144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.116159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.116173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.116201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.126028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.126174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.126200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.126214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.126227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.126256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.136043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.136172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.136197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.136211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.136225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.136253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.146083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.146225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.146251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.146265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.146278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.918 [2024-07-24 18:08:28.146305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.918 qpair failed and we were unable to recover it.
00:25:41.918 [2024-07-24 18:08:28.156164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.918 [2024-07-24 18:08:28.156319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.918 [2024-07-24 18:08:28.156345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.918 [2024-07-24 18:08:28.156359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.918 [2024-07-24 18:08:28.156373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.919 [2024-07-24 18:08:28.156402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.919 qpair failed and we were unable to recover it.
00:25:41.919 [2024-07-24 18:08:28.166140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.919 [2024-07-24 18:08:28.166279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.919 [2024-07-24 18:08:28.166304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.919 [2024-07-24 18:08:28.166319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.919 [2024-07-24 18:08:28.166332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.919 [2024-07-24 18:08:28.166360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.919 qpair failed and we were unable to recover it.
00:25:41.919 [2024-07-24 18:08:28.176185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:41.919 [2024-07-24 18:08:28.176310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:41.919 [2024-07-24 18:08:28.176335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:41.919 [2024-07-24 18:08:28.176357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:41.919 [2024-07-24 18:08:28.176370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:41.919 [2024-07-24 18:08:28.176399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:41.919 qpair failed and we were unable to recover it.
00:25:42.178 [2024-07-24 18:08:28.186286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.178 [2024-07-24 18:08:28.186412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.178 [2024-07-24 18:08:28.186437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.178 [2024-07-24 18:08:28.186452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.178 [2024-07-24 18:08:28.186465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.178 [2024-07-24 18:08:28.186493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.178 qpair failed and we were unable to recover it.
00:25:42.178 [2024-07-24 18:08:28.196223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.178 [2024-07-24 18:08:28.196350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.178 [2024-07-24 18:08:28.196374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.178 [2024-07-24 18:08:28.196388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.178 [2024-07-24 18:08:28.196401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.178 [2024-07-24 18:08:28.196428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.178 qpair failed and we were unable to recover it.
00:25:42.178 [2024-07-24 18:08:28.206244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.178 [2024-07-24 18:08:28.206383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.178 [2024-07-24 18:08:28.206408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.178 [2024-07-24 18:08:28.206422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.178 [2024-07-24 18:08:28.206435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.178 [2024-07-24 18:08:28.206464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.178 qpair failed and we were unable to recover it.
00:25:42.178 [2024-07-24 18:08:28.216371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.178 [2024-07-24 18:08:28.216495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.178 [2024-07-24 18:08:28.216521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.178 [2024-07-24 18:08:28.216536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.178 [2024-07-24 18:08:28.216549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.178 [2024-07-24 18:08:28.216577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.178 qpair failed and we were unable to recover it.
00:25:42.178 [2024-07-24 18:08:28.226335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.178 [2024-07-24 18:08:28.226497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.178 [2024-07-24 18:08:28.226522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.178 [2024-07-24 18:08:28.226536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.178 [2024-07-24 18:08:28.226550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.178 [2024-07-24 18:08:28.226578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.178 qpair failed and we were unable to recover it.
00:25:42.178 [2024-07-24 18:08:28.236319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.236441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.236466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.236481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.236494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.236522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.246365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.246495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.246521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.246535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.246549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.246578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.256470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.256592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.256617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.256631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.256645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.256673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.266439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.266576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.266601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.266621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.266636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.266664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.276423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.276544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.276570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.276585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.276598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.276626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.286450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.286577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.286602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.286617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.286630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.286659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.296500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.296627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.296652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.296666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.296679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.296708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.306491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.306614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.306639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.306654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.306667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.306695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.316592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.316721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.316747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.316762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.316776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.316804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.326593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.326726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.326751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.326766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.326780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.326808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.336617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.336744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.336769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.336783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.336797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.336825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.346645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.346762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.346787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.346801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.346814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.346842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.356733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.356855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.179 [2024-07-24 18:08:28.356885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.179 [2024-07-24 18:08:28.356901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.179 [2024-07-24 18:08:28.356914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.179 [2024-07-24 18:08:28.356942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.179 qpair failed and we were unable to recover it.
00:25:42.179 [2024-07-24 18:08:28.366722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.179 [2024-07-24 18:08:28.366892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.366916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.366931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.366944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.366972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.180 [2024-07-24 18:08:28.376789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.180 [2024-07-24 18:08:28.376930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.376956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.376970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.376983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.377011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.180 [2024-07-24 18:08:28.386841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.180 [2024-07-24 18:08:28.386961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.386986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.387000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.387013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.387041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.180 [2024-07-24 18:08:28.396870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.180 [2024-07-24 18:08:28.397002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.397027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.397042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.397055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.397084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.180 [2024-07-24 18:08:28.406890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.180 [2024-07-24 18:08:28.407065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.407091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.407113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.407127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.407156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.180 [2024-07-24 18:08:28.416976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.180 [2024-07-24 18:08:28.417098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.417134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.417149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.417162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.417193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.180 [2024-07-24 18:08:28.426919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.180 [2024-07-24 18:08:28.427042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.427068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.427083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.427095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.427131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.180 [2024-07-24 18:08:28.436895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.180 [2024-07-24 18:08:28.437015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.180 [2024-07-24 18:08:28.437040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.180 [2024-07-24 18:08:28.437054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.180 [2024-07-24 18:08:28.437068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.180 [2024-07-24 18:08:28.437096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.180 qpair failed and we were unable to recover it.
00:25:42.439 [2024-07-24 18:08:28.446941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.439 [2024-07-24 18:08:28.447071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.439 [2024-07-24 18:08:28.447111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.439 [2024-07-24 18:08:28.447130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.439 [2024-07-24 18:08:28.447144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.439 [2024-07-24 18:08:28.447173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.439 qpair failed and we were unable to recover it.
00:25:42.439 [2024-07-24 18:08:28.456954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.439 [2024-07-24 18:08:28.457086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.439 [2024-07-24 18:08:28.457119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.439 [2024-07-24 18:08:28.457135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.439 [2024-07-24 18:08:28.457148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.439 [2024-07-24 18:08:28.457177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.439 qpair failed and we were unable to recover it.
00:25:42.439 [2024-07-24 18:08:28.467056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.439 [2024-07-24 18:08:28.467204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.439 [2024-07-24 18:08:28.467229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.439 [2024-07-24 18:08:28.467244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.439 [2024-07-24 18:08:28.467257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.439 [2024-07-24 18:08:28.467288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.439 qpair failed and we were unable to recover it.
00:25:42.439 [2024-07-24 18:08:28.477022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.439 [2024-07-24 18:08:28.477153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.439 [2024-07-24 18:08:28.477179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.439 [2024-07-24 18:08:28.477194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.439 [2024-07-24 18:08:28.477207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.439 [2024-07-24 18:08:28.477236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.439 qpair failed and we were unable to recover it.
00:25:42.439 [2024-07-24 18:08:28.487034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.439 [2024-07-24 18:08:28.487177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.439 [2024-07-24 18:08:28.487203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.439 [2024-07-24 18:08:28.487218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.439 [2024-07-24 18:08:28.487231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.439 [2024-07-24 18:08:28.487266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.497098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.497294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.497320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.497338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.497351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.497380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.507127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.507254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.507280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.507298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.507311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.507340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.517195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.517316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.517341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.517357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.517370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.517399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.527165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.527294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.527321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.527335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.527349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.527379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.537172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.537327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.537358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.537373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.537387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.537415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.547312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.547438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.547465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.547480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.547493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.547523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.557288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.557444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.557470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.557485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.557498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.557527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.567324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.567494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.567520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.567534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.567547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.567576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.577285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.577409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.577434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.577449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.577462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.577497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.587314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.587436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.587462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.587476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.587490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.587518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.597451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.597578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.597604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.597618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.597632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.597661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.607431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.607557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.607583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.607597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.607610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.607639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.617543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.617695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.440 [2024-07-24 18:08:28.617721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.440 [2024-07-24 18:08:28.617736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.440 [2024-07-24 18:08:28.617749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.440 [2024-07-24 18:08:28.617777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.440 qpair failed and we were unable to recover it.
00:25:42.440 [2024-07-24 18:08:28.627474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:42.440 [2024-07-24 18:08:28.627651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:42.441 [2024-07-24 18:08:28.627683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:42.441 [2024-07-24 18:08:28.627699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:42.441 [2024-07-24 18:08:28.627712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250
00:25:42.441 [2024-07-24 18:08:28.627740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:42.441 qpair failed and we were unable to recover it.
00:25:42.441 [2024-07-24 18:08:28.637566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-24 18:08:28.637687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-24 18:08:28.637713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-24 18:08:28.637728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-24 18:08:28.637741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.441 [2024-07-24 18:08:28.637769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-24 18:08:28.647506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-24 18:08:28.647629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-24 18:08:28.647655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-24 18:08:28.647670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-24 18:08:28.647683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.441 [2024-07-24 18:08:28.647713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-24 18:08:28.657603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-24 18:08:28.657729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-24 18:08:28.657754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-24 18:08:28.657768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-24 18:08:28.657782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.441 [2024-07-24 18:08:28.657810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 
00:25:42.441 [2024-07-24 18:08:28.667573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-24 18:08:28.667690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-24 18:08:28.667716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-24 18:08:28.667730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-24 18:08:28.667743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.441 [2024-07-24 18:08:28.667777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-24 18:08:28.677571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-24 18:08:28.677695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-24 18:08:28.677720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-24 18:08:28.677735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-24 18:08:28.677748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.441 [2024-07-24 18:08:28.677777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.441 [2024-07-24 18:08:28.687590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-24 18:08:28.687718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-24 18:08:28.687743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-24 18:08:28.687758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-24 18:08:28.687771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.441 [2024-07-24 18:08:28.687799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 
00:25:42.441 [2024-07-24 18:08:28.697631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.441 [2024-07-24 18:08:28.697801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.441 [2024-07-24 18:08:28.697827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.441 [2024-07-24 18:08:28.697841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.441 [2024-07-24 18:08:28.697854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.441 [2024-07-24 18:08:28.697883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.441 qpair failed and we were unable to recover it. 00:25:42.700 [2024-07-24 18:08:28.707650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.700 [2024-07-24 18:08:28.707785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.700 [2024-07-24 18:08:28.707811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.700 [2024-07-24 18:08:28.707825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.700 [2024-07-24 18:08:28.707838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.700 [2024-07-24 18:08:28.707866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.700 qpair failed and we were unable to recover it. 00:25:42.700 [2024-07-24 18:08:28.717704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.700 [2024-07-24 18:08:28.717846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.700 [2024-07-24 18:08:28.717876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.700 [2024-07-24 18:08:28.717891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.700 [2024-07-24 18:08:28.717905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.700 [2024-07-24 18:08:28.717933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.700 qpair failed and we were unable to recover it. 
00:25:42.700 [2024-07-24 18:08:28.727782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.700 [2024-07-24 18:08:28.727921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.700 [2024-07-24 18:08:28.727946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.700 [2024-07-24 18:08:28.727961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.700 [2024-07-24 18:08:28.727974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.700 [2024-07-24 18:08:28.728002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.700 qpair failed and we were unable to recover it. 00:25:42.700 [2024-07-24 18:08:28.737752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.700 [2024-07-24 18:08:28.737882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.700 [2024-07-24 18:08:28.737908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.700 [2024-07-24 18:08:28.737922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.700 [2024-07-24 18:08:28.737935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.700 [2024-07-24 18:08:28.737963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.700 qpair failed and we were unable to recover it. 00:25:42.700 [2024-07-24 18:08:28.747788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.747915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.747940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.747955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.747968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.747997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 
00:25:42.701 [2024-07-24 18:08:28.757797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.757920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.757945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.757960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.757978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.758007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.767835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.767966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.767991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.768006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.768019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.768046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.777898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.778026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.778051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.778067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.778080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.778116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 
00:25:42.701 [2024-07-24 18:08:28.788039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.788164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.788189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.788204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.788217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.788245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.797922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.798095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.798126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.798141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.798154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.798183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.808094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.808240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.808266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.808280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.808293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.808321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 
00:25:42.701 [2024-07-24 18:08:28.818069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.818218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.818244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.818263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.818277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.818306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.828023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.828152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.828178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.828193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.828206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.828235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.838031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.838154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.838180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.838195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.838208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.838237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 
00:25:42.701 [2024-07-24 18:08:28.848076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.848217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.848242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.848257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.848275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.848306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.858187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.858317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.858342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.858357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.858370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.858399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 00:25:42.701 [2024-07-24 18:08:28.868134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.701 [2024-07-24 18:08:28.868257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.701 [2024-07-24 18:08:28.868282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.701 [2024-07-24 18:08:28.868297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.701 [2024-07-24 18:08:28.868311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.701 [2024-07-24 18:08:28.868339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.701 qpair failed and we were unable to recover it. 
00:25:42.702 [2024-07-24 18:08:28.878201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.878333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.878359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.878374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.878387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.878415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 00:25:42.702 [2024-07-24 18:08:28.888209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.888370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.888395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.888409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.888423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.888451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 00:25:42.702 [2024-07-24 18:08:28.898226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.898392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.898418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.898433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.898446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.898475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 
00:25:42.702 [2024-07-24 18:08:28.908244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.908366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.908391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.908406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.908419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.908447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 00:25:42.702 [2024-07-24 18:08:28.918261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.918387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.918412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.918427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.918440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.918468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 00:25:42.702 [2024-07-24 18:08:28.928334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.928462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.928487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.928502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.928515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.928543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 
00:25:42.702 [2024-07-24 18:08:28.938339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.938478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.938504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.938518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.938537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.938566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 00:25:42.702 [2024-07-24 18:08:28.948361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.948492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.948517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.948532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.948547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.948576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 00:25:42.702 [2024-07-24 18:08:28.958434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.702 [2024-07-24 18:08:28.958605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.702 [2024-07-24 18:08:28.958630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.702 [2024-07-24 18:08:28.958644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.702 [2024-07-24 18:08:28.958658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.702 [2024-07-24 18:08:28.958686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.702 qpair failed and we were unable to recover it. 
00:25:42.962 [2024-07-24 18:08:28.968478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:28.968641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:28.968668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:28.968683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:28.968697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.962 [2024-07-24 18:08:28.968726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.962 qpair failed and we were unable to recover it. 00:25:42.962 [2024-07-24 18:08:28.978459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:28.978593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:28.978618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:28.978633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:28.978646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa6c250 00:25:42.962 [2024-07-24 18:08:28.978675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:42.962 qpair failed and we were unable to recover it. 00:25:42.962 [2024-07-24 18:08:28.988522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:28.988675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:28.988707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:28.988724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:28.988737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.962 [2024-07-24 18:08:28.988769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.962 qpair failed and we were unable to recover it. 
00:25:42.962 [2024-07-24 18:08:28.998555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:28.998699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:28.998727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:28.998742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:28.998756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.962 [2024-07-24 18:08:28.998786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.962 qpair failed and we were unable to recover it. 00:25:42.962 [2024-07-24 18:08:29.008658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:29.008846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:29.008874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:29.008889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:29.008902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.962 [2024-07-24 18:08:29.008933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.962 qpair failed and we were unable to recover it. 00:25:42.962 [2024-07-24 18:08:29.018578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:29.018703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:29.018730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:29.018745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:29.018758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.962 [2024-07-24 18:08:29.018802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.962 qpair failed and we were unable to recover it. 
00:25:42.962 [2024-07-24 18:08:29.028595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:29.028721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:29.028748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:29.028769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:29.028783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.962 [2024-07-24 18:08:29.028815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.962 qpair failed and we were unable to recover it. 00:25:42.962 [2024-07-24 18:08:29.038647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:29.038773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:29.038800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.962 [2024-07-24 18:08:29.038815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.962 [2024-07-24 18:08:29.038829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.962 [2024-07-24 18:08:29.038860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.962 qpair failed and we were unable to recover it. 00:25:42.962 [2024-07-24 18:08:29.048632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.962 [2024-07-24 18:08:29.048757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.962 [2024-07-24 18:08:29.048783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.048799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.048812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.048856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 
00:25:42.963 [2024-07-24 18:08:29.058700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.058837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.058863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.058878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.058892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.058923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.068713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.068834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.068861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.068877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.068890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.068933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.078816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.078942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.078969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.078984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.078998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.079029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 
00:25:42.963 [2024-07-24 18:08:29.088767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.088895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.088922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.088937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.088950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.088981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.098807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.098941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.098967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.098983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.098996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.099027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.108830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.108954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.108981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.108996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.109009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.109040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 
00:25:42.963 [2024-07-24 18:08:29.118859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.119006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.119037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.119053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.119067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.119097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.128951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.129078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.129111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.129131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.129144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.129175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.138891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.139018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.139045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.139060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.139074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.139114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 
00:25:42.963 [2024-07-24 18:08:29.149023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.149154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.149181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.149196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.149208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.149238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.158930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.159070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.159097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.159127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.159142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.159179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 00:25:42.963 [2024-07-24 18:08:29.168967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.169090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.169122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.169138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.963 [2024-07-24 18:08:29.169151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.963 [2024-07-24 18:08:29.169183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.963 qpair failed and we were unable to recover it. 
00:25:42.963 [2024-07-24 18:08:29.179092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.963 [2024-07-24 18:08:29.179247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.963 [2024-07-24 18:08:29.179273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.963 [2024-07-24 18:08:29.179288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.964 [2024-07-24 18:08:29.179302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.964 [2024-07-24 18:08:29.179332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.964 qpair failed and we were unable to recover it. 00:25:42.964 [2024-07-24 18:08:29.189118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.964 [2024-07-24 18:08:29.189246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.964 [2024-07-24 18:08:29.189272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.964 [2024-07-24 18:08:29.189287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.964 [2024-07-24 18:08:29.189301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.964 [2024-07-24 18:08:29.189331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.964 qpair failed and we were unable to recover it. 00:25:42.964 [2024-07-24 18:08:29.199044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.964 [2024-07-24 18:08:29.199168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.964 [2024-07-24 18:08:29.199192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.964 [2024-07-24 18:08:29.199207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.964 [2024-07-24 18:08:29.199219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.964 [2024-07-24 18:08:29.199249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.964 qpair failed and we were unable to recover it. 
00:25:42.964 [2024-07-24 18:08:29.209162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.964 [2024-07-24 18:08:29.209303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.964 [2024-07-24 18:08:29.209334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.964 [2024-07-24 18:08:29.209350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.964 [2024-07-24 18:08:29.209363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.964 [2024-07-24 18:08:29.209394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.964 qpair failed and we were unable to recover it. 00:25:42.964 [2024-07-24 18:08:29.219181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.964 [2024-07-24 18:08:29.219310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.964 [2024-07-24 18:08:29.219337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.964 [2024-07-24 18:08:29.219352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.964 [2024-07-24 18:08:29.219365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.964 [2024-07-24 18:08:29.219396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.964 qpair failed and we were unable to recover it. 00:25:42.964 [2024-07-24 18:08:29.229162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:42.964 [2024-07-24 18:08:29.229310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:42.964 [2024-07-24 18:08:29.229338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:42.964 [2024-07-24 18:08:29.229354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:42.964 [2024-07-24 18:08:29.229367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:42.964 [2024-07-24 18:08:29.229414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:42.964 qpair failed and we were unable to recover it. 
00:25:43.223 [2024-07-24 18:08:29.239158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.223 [2024-07-24 18:08:29.239290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.223 [2024-07-24 18:08:29.239320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.223 [2024-07-24 18:08:29.239335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.223 [2024-07-24 18:08:29.239348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.223 [2024-07-24 18:08:29.239380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.223 qpair failed and we were unable to recover it. 00:25:43.223 [2024-07-24 18:08:29.249259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.223 [2024-07-24 18:08:29.249393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.223 [2024-07-24 18:08:29.249420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.223 [2024-07-24 18:08:29.249436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.223 [2024-07-24 18:08:29.249449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.223 [2024-07-24 18:08:29.249498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.223 qpair failed and we were unable to recover it. 00:25:43.223 [2024-07-24 18:08:29.259219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.223 [2024-07-24 18:08:29.259345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.223 [2024-07-24 18:08:29.259372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.223 [2024-07-24 18:08:29.259386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.223 [2024-07-24 18:08:29.259400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.223 [2024-07-24 18:08:29.259432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.223 qpair failed and we were unable to recover it. 
00:25:43.223 [2024-07-24 18:08:29.269246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.223 [2024-07-24 18:08:29.269367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.223 [2024-07-24 18:08:29.269393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.223 [2024-07-24 18:08:29.269408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.223 [2024-07-24 18:08:29.269420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.223 [2024-07-24 18:08:29.269450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.223 qpair failed and we were unable to recover it. 00:25:43.223 [2024-07-24 18:08:29.279271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.223 [2024-07-24 18:08:29.279409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.223 [2024-07-24 18:08:29.279435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.223 [2024-07-24 18:08:29.279451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.223 [2024-07-24 18:08:29.279464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.223 [2024-07-24 18:08:29.279493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.223 qpair failed and we were unable to recover it. 00:25:43.223 [2024-07-24 18:08:29.289318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.223 [2024-07-24 18:08:29.289443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.223 [2024-07-24 18:08:29.289470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.223 [2024-07-24 18:08:29.289485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.223 [2024-07-24 18:08:29.289499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.223 [2024-07-24 18:08:29.289530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.223 qpair failed and we were unable to recover it. 
00:25:43.223 [2024-07-24 18:08:29.299433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.299567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.299599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.299618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.299632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.299665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.309431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.309555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.309581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.309596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.309609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.309641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.319387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.319518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.319545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.319560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.319574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.319604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 
00:25:43.224 [2024-07-24 18:08:29.329417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.329540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.329566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.329582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.329595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.329627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.339485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.339657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.339683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.339699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.339717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.339749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.349465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.349594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.349620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.349639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.349652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.349683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 
00:25:43.224 [2024-07-24 18:08:29.359578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.359701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.359726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.359742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.359755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.359785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.369656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.369797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.369823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.369839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.369853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.369883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.379572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.379729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.379755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.379770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.379784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.379813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 
00:25:43.224 [2024-07-24 18:08:29.389608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.389743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.389768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.389783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.389796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.389829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.399750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.399879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.399904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.399919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.399933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.399963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 00:25:43.224 [2024-07-24 18:08:29.409688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.409817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.409844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.409859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.224 [2024-07-24 18:08:29.409873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.224 [2024-07-24 18:08:29.409903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.224 qpair failed and we were unable to recover it. 
00:25:43.224 [2024-07-24 18:08:29.419803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.224 [2024-07-24 18:08:29.419940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.224 [2024-07-24 18:08:29.419966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.224 [2024-07-24 18:08:29.419981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.419995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.420025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 00:25:43.225 [2024-07-24 18:08:29.429723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.225 [2024-07-24 18:08:29.429861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.225 [2024-07-24 18:08:29.429887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.225 [2024-07-24 18:08:29.429909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.429923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.429954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 00:25:43.225 [2024-07-24 18:08:29.439773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.225 [2024-07-24 18:08:29.439941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.225 [2024-07-24 18:08:29.439966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.225 [2024-07-24 18:08:29.439981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.439995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.440027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 
00:25:43.225 [2024-07-24 18:08:29.449780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.225 [2024-07-24 18:08:29.449904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.225 [2024-07-24 18:08:29.449930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.225 [2024-07-24 18:08:29.449945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.449958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.450001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 00:25:43.225 [2024-07-24 18:08:29.459816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.225 [2024-07-24 18:08:29.459966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.225 [2024-07-24 18:08:29.459991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.225 [2024-07-24 18:08:29.460006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.460019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.460050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 00:25:43.225 [2024-07-24 18:08:29.469829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.225 [2024-07-24 18:08:29.469994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.225 [2024-07-24 18:08:29.470020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.225 [2024-07-24 18:08:29.470035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.470049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.470093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 
00:25:43.225 [2024-07-24 18:08:29.479841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.225 [2024-07-24 18:08:29.479961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.225 [2024-07-24 18:08:29.479987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.225 [2024-07-24 18:08:29.480002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.480015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.480045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 00:25:43.225 [2024-07-24 18:08:29.490012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.225 [2024-07-24 18:08:29.490156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.225 [2024-07-24 18:08:29.490186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.225 [2024-07-24 18:08:29.490205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.225 [2024-07-24 18:08:29.490219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.225 [2024-07-24 18:08:29.490253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.225 qpair failed and we were unable to recover it. 00:25:43.484 [2024-07-24 18:08:29.499924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.484 [2024-07-24 18:08:29.500071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.484 [2024-07-24 18:08:29.500099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.484 [2024-07-24 18:08:29.500124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.484 [2024-07-24 18:08:29.500138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.484 [2024-07-24 18:08:29.500170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.484 qpair failed and we were unable to recover it. 
00:25:43.484 [2024-07-24 18:08:29.510014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.484 [2024-07-24 18:08:29.510142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.484 [2024-07-24 18:08:29.510169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.484 [2024-07-24 18:08:29.510184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.484 [2024-07-24 18:08:29.510197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.484 [2024-07-24 18:08:29.510230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.484 qpair failed and we were unable to recover it. 00:25:43.484 [2024-07-24 18:08:29.519963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.484 [2024-07-24 18:08:29.520095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.484 [2024-07-24 18:08:29.520132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.520155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.520169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.520203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.530032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.530194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.530220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.530236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.530249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.530280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 
00:25:43.485 [2024-07-24 18:08:29.540000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.540141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.540168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.540183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.540197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.540227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.550042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.550186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.550213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.550228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.550241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.550273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.560092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.560281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.560307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.560323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.560336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.560366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 
00:25:43.485 [2024-07-24 18:08:29.570130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.570273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.570299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.570314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.570328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.570358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.580213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.580344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.580374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.580389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.580403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.580433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.590156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.590293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.590319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.590334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.590347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.590380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 
00:25:43.485 [2024-07-24 18:08:29.600207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.600339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.600365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.600380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.600393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.600425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.610236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.610402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.610433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.610449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.610462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.610493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.620237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.620362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.620389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.620404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.620417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.620448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 
00:25:43.485 [2024-07-24 18:08:29.630260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.630383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.630409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.630424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.630438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.630468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.640417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.640544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.640571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.640586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.640600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.485 [2024-07-24 18:08:29.640630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.485 qpair failed and we were unable to recover it. 00:25:43.485 [2024-07-24 18:08:29.650344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.485 [2024-07-24 18:08:29.650470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.485 [2024-07-24 18:08:29.650497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.485 [2024-07-24 18:08:29.650512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.485 [2024-07-24 18:08:29.650527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.650576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 
00:25:43.486 [2024-07-24 18:08:29.660353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.660479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.660505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.660520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.660533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.660566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 00:25:43.486 [2024-07-24 18:08:29.670384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.670512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.670539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.670554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.670567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.670598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 00:25:43.486 [2024-07-24 18:08:29.680410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.680531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.680558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.680573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.680586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.680618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 
00:25:43.486 [2024-07-24 18:08:29.690436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.690572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.690599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.690614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.690631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.690663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 00:25:43.486 [2024-07-24 18:08:29.700479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.700604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.700637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.700654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.700667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.700711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 00:25:43.486 [2024-07-24 18:08:29.710520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.710646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.710673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.710688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.710701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.710731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 
00:25:43.486 [2024-07-24 18:08:29.720525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.720648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.720675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.720690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.720703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.720747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 00:25:43.486 [2024-07-24 18:08:29.730555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.730693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.730719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.730734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.730748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.730780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 00:25:43.486 [2024-07-24 18:08:29.740707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.740838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.740865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.740880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.740899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.740932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 
00:25:43.486 [2024-07-24 18:08:29.750607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.486 [2024-07-24 18:08:29.750761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.486 [2024-07-24 18:08:29.750790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.486 [2024-07-24 18:08:29.750806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.486 [2024-07-24 18:08:29.750820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.486 [2024-07-24 18:08:29.750853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.486 qpair failed and we were unable to recover it. 00:25:43.747 [2024-07-24 18:08:29.760635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.760772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.760801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.760816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.760829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.760860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 00:25:43.747 [2024-07-24 18:08:29.770670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.770799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.770825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.770840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.770854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.770884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 
00:25:43.747 [2024-07-24 18:08:29.780814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.780954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.780981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.780996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.781010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.781042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 00:25:43.747 [2024-07-24 18:08:29.790720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.790862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.790889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.790904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.790918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.790948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 00:25:43.747 [2024-07-24 18:08:29.800806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.800963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.800991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.801006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.801023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.801070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 
00:25:43.747 [2024-07-24 18:08:29.810883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.811031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.811058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.811073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.811086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.811123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 00:25:43.747 [2024-07-24 18:08:29.820800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.820934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.820960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.820975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.820988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.821018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 00:25:43.747 [2024-07-24 18:08:29.830920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.831042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.831068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.831089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.831110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.831144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 
00:25:43.747 [2024-07-24 18:08:29.840946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.747 [2024-07-24 18:08:29.841071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.747 [2024-07-24 18:08:29.841099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.747 [2024-07-24 18:08:29.841127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.747 [2024-07-24 18:08:29.841142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.747 [2024-07-24 18:08:29.841175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.747 qpair failed and we were unable to recover it. 00:25:43.747 [2024-07-24 18:08:29.850902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.851023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.851050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.851065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.851078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.851115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.860960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.861086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.861123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.861139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.861152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.861183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 
00:25:43.748 [2024-07-24 18:08:29.870960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.871134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.871160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.871175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.871188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.871220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.881020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.881151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.881177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.881192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.881205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.881236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.891012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.891143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.891170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.891184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.891197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.891241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 
00:25:43.748 [2024-07-24 18:08:29.901176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.901346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.901372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.901387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.901400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.901431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.911059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.911215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.911241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.911256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.911269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.911301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.921090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.921233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.921258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.921279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.921293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.921324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 
00:25:43.748 [2024-07-24 18:08:29.931144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.931283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.931309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.931324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.931337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.931369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.941163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.941304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.941332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.941348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.941362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.941393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.951192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.951313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.951340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.951355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.951369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.951400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 
00:25:43.748 [2024-07-24 18:08:29.961299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.961448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.961474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.961490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.961503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.961532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.971276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.971403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.748 [2024-07-24 18:08:29.971429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.748 [2024-07-24 18:08:29.971444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.748 [2024-07-24 18:08:29.971457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.748 [2024-07-24 18:08:29.971486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.748 qpair failed and we were unable to recover it. 00:25:43.748 [2024-07-24 18:08:29.981292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.748 [2024-07-24 18:08:29.981418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.749 [2024-07-24 18:08:29.981444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.749 [2024-07-24 18:08:29.981459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.749 [2024-07-24 18:08:29.981472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.749 [2024-07-24 18:08:29.981503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.749 qpair failed and we were unable to recover it. 
00:25:43.749 [2024-07-24 18:08:29.991348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.749 [2024-07-24 18:08:29.991475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.749 [2024-07-24 18:08:29.991501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.749 [2024-07-24 18:08:29.991516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.749 [2024-07-24 18:08:29.991529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.749 [2024-07-24 18:08:29.991559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.749 qpair failed and we were unable to recover it. 00:25:43.749 [2024-07-24 18:08:30.001466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.749 [2024-07-24 18:08:30.001646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.749 [2024-07-24 18:08:30.001682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.749 [2024-07-24 18:08:30.001709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.749 [2024-07-24 18:08:30.001726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.749 [2024-07-24 18:08:30.001760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.749 qpair failed and we were unable to recover it. 00:25:43.749 [2024-07-24 18:08:30.011485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:43.749 [2024-07-24 18:08:30.011650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:43.749 [2024-07-24 18:08:30.011686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:43.749 [2024-07-24 18:08:30.011712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:43.749 [2024-07-24 18:08:30.011738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:43.749 [2024-07-24 18:08:30.011788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:43.749 qpair failed and we were unable to recover it. 
00:25:44.007 [2024-07-24 18:08:30.021419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.021545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.021575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.021600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.021625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.021674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.031471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.031637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.031664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.031702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.031737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.031798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.041486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.041623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.041650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.041674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.041699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.041758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 
00:25:44.008 [2024-07-24 18:08:30.051573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.051725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.051753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.051778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.051803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.051857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.061521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.061681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.061708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.061733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.061772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.061819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.071548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.071680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.071707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.071732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.071771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.071837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 
00:25:44.008 [2024-07-24 18:08:30.081626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.081792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.081819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.081857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.081881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.081943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.091590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.091768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.091796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.091820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.091845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.091892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.101679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.101812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.101845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.101871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.101895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.101957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 
00:25:44.008 [2024-07-24 18:08:30.111730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.111881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.111916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.111943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.111967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.112010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.121784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.121967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.121998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.122013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.122027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.122057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 00:25:44.008 [2024-07-24 18:08:30.131721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.131847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.131874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.131889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.131903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.131933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.008 qpair failed and we were unable to recover it. 
00:25:44.008 [2024-07-24 18:08:30.141850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.008 [2024-07-24 18:08:30.141991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.008 [2024-07-24 18:08:30.142028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.008 [2024-07-24 18:08:30.142044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.008 [2024-07-24 18:08:30.142063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.008 [2024-07-24 18:08:30.142094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.151788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.151928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.151955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.151973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.151986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.152015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.161859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.161983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.162009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.162027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.162040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.162071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 
00:25:44.009 [2024-07-24 18:08:30.171839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.172013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.172039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.172054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.172067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.172099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.181862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.182027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.182053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.182068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.182081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.182118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.191929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.192059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.192085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.192100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.192121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.192152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 
00:25:44.009 [2024-07-24 18:08:30.201897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.202045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.202070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.202084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.202096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.202134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.211975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.212129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.212163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.212178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.212191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.212221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.221949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.222131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.222158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.222175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.222189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.222233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 
00:25:44.009 [2024-07-24 18:08:30.232110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.232245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.232273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.232288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.232310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.232343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.241996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.242173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.242200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.242216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.242229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.242260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.252049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.252229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.252256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.252271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.252284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.252316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 
00:25:44.009 [2024-07-24 18:08:30.262055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.262181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.262207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.262223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.262236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.009 [2024-07-24 18:08:30.262266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.009 qpair failed and we were unable to recover it. 00:25:44.009 [2024-07-24 18:08:30.272091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.009 [2024-07-24 18:08:30.272236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.009 [2024-07-24 18:08:30.272264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.009 [2024-07-24 18:08:30.272279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.009 [2024-07-24 18:08:30.272293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.010 [2024-07-24 18:08:30.272326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.010 qpair failed and we were unable to recover it. 00:25:44.269 [2024-07-24 18:08:30.282115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.269 [2024-07-24 18:08:30.282235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.269 [2024-07-24 18:08:30.282263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.269 [2024-07-24 18:08:30.282278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.269 [2024-07-24 18:08:30.282292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.269 [2024-07-24 18:08:30.282323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.269 qpair failed and we were unable to recover it. 
00:25:44.269 [2024-07-24 18:08:30.292161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.269 [2024-07-24 18:08:30.292298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.269 [2024-07-24 18:08:30.292325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.269 [2024-07-24 18:08:30.292341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.269 [2024-07-24 18:08:30.292355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.269 [2024-07-24 18:08:30.292388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.269 qpair failed and we were unable to recover it. 00:25:44.269 [2024-07-24 18:08:30.302156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.269 [2024-07-24 18:08:30.302276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.269 [2024-07-24 18:08:30.302302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.269 [2024-07-24 18:08:30.302317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.269 [2024-07-24 18:08:30.302330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.269 [2024-07-24 18:08:30.302362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.269 qpair failed and we were unable to recover it. 00:25:44.269 [2024-07-24 18:08:30.312238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.269 [2024-07-24 18:08:30.312424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.269 [2024-07-24 18:08:30.312450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.269 [2024-07-24 18:08:30.312465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.269 [2024-07-24 18:08:30.312479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.269 [2024-07-24 18:08:30.312509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.269 qpair failed and we were unable to recover it. 
00:25:44.269 [2024-07-24 18:08:30.322314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.269 [2024-07-24 18:08:30.322454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.269 [2024-07-24 18:08:30.322480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.269 [2024-07-24 18:08:30.322502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.269 [2024-07-24 18:08:30.322516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.269 [2024-07-24 18:08:30.322546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.269 qpair failed and we were unable to recover it. 00:25:44.269 [2024-07-24 18:08:30.332274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.269 [2024-07-24 18:08:30.332403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.269 [2024-07-24 18:08:30.332428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.269 [2024-07-24 18:08:30.332443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.269 [2024-07-24 18:08:30.332457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.269 [2024-07-24 18:08:30.332488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.269 qpair failed and we were unable to recover it. 00:25:44.269 [2024-07-24 18:08:30.342277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.269 [2024-07-24 18:08:30.342402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.342428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.342443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.342456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.342486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 
00:25:44.270 [2024-07-24 18:08:30.352303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.352450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.352476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.352490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.352503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.352534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.362335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.362460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.362487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.362502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.362515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.362559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.372367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.372492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.372517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.372532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.372545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.372577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 
00:25:44.270 [2024-07-24 18:08:30.382475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.382613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.382650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.382665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.382678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.382710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.392435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.392596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.392622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.392637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.392650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.392680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.402471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.402599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.402626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.402641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.402654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.402686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 
00:25:44.270 [2024-07-24 18:08:30.412611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.412758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.412790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.412806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.412819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.412851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.422561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.422703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.422729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.422744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.422758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.422788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.432677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.432803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.432829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.432844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.432858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.432888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 
00:25:44.270 [2024-07-24 18:08:30.442594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.442728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.442754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.442769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.442782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.442813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.452597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.452723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.452750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.452765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.452778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.452827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 00:25:44.270 [2024-07-24 18:08:30.462654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.462830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.462856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.462871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.270 [2024-07-24 18:08:30.462885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.270 [2024-07-24 18:08:30.462915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.270 qpair failed and we were unable to recover it. 
00:25:44.270 [2024-07-24 18:08:30.472640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.270 [2024-07-24 18:08:30.472775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.270 [2024-07-24 18:08:30.472801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.270 [2024-07-24 18:08:30.472816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.271 [2024-07-24 18:08:30.472829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.271 [2024-07-24 18:08:30.472862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.271 qpair failed and we were unable to recover it. 00:25:44.271 [2024-07-24 18:08:30.482780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.271 [2024-07-24 18:08:30.482902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.271 [2024-07-24 18:08:30.482928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.271 [2024-07-24 18:08:30.482943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.271 [2024-07-24 18:08:30.482957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.271 [2024-07-24 18:08:30.482989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.271 qpair failed and we were unable to recover it. 00:25:44.271 [2024-07-24 18:08:30.492698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.271 [2024-07-24 18:08:30.492838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.271 [2024-07-24 18:08:30.492865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.271 [2024-07-24 18:08:30.492880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.271 [2024-07-24 18:08:30.492893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.271 [2024-07-24 18:08:30.492925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.271 qpair failed and we were unable to recover it. 
00:25:44.271 [2024-07-24 18:08:30.502717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.271 [2024-07-24 18:08:30.502894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.271 [2024-07-24 18:08:30.502926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.271 [2024-07-24 18:08:30.502943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.271 [2024-07-24 18:08:30.502957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.271 [2024-07-24 18:08:30.502988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.271 qpair failed and we were unable to recover it. 00:25:44.271 [2024-07-24 18:08:30.512800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.271 [2024-07-24 18:08:30.512929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.271 [2024-07-24 18:08:30.512956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.271 [2024-07-24 18:08:30.512971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.271 [2024-07-24 18:08:30.512984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.271 [2024-07-24 18:08:30.513016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.271 qpair failed and we were unable to recover it. 00:25:44.271 [2024-07-24 18:08:30.522860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.271 [2024-07-24 18:08:30.522985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.271 [2024-07-24 18:08:30.523011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.271 [2024-07-24 18:08:30.523026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.271 [2024-07-24 18:08:30.523039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.271 [2024-07-24 18:08:30.523070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.271 qpair failed and we were unable to recover it. 
00:25:44.271 [2024-07-24 18:08:30.532840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.271 [2024-07-24 18:08:30.532983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.271 [2024-07-24 18:08:30.533010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.271 [2024-07-24 18:08:30.533025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.271 [2024-07-24 18:08:30.533039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.271 [2024-07-24 18:08:30.533071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.271 qpair failed and we were unable to recover it. 00:25:44.530 [2024-07-24 18:08:30.542861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.530 [2024-07-24 18:08:30.542992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.530 [2024-07-24 18:08:30.543020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.530 [2024-07-24 18:08:30.543036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.530 [2024-07-24 18:08:30.543049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.530 [2024-07-24 18:08:30.543088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.530 qpair failed and we were unable to recover it. 00:25:44.530 [2024-07-24 18:08:30.552952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.530 [2024-07-24 18:08:30.553120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.530 [2024-07-24 18:08:30.553148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.530 [2024-07-24 18:08:30.553163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.530 [2024-07-24 18:08:30.553176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.530 [2024-07-24 18:08:30.553207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.530 qpair failed and we were unable to recover it. 
00:25:44.530 [2024-07-24 18:08:30.562877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.530 [2024-07-24 18:08:30.563016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.530 [2024-07-24 18:08:30.563043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.530 [2024-07-24 18:08:30.563057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.530 [2024-07-24 18:08:30.563071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.530 [2024-07-24 18:08:30.563100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.530 qpair failed and we were unable to recover it. 00:25:44.530 [2024-07-24 18:08:30.572951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.530 [2024-07-24 18:08:30.573093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.530 [2024-07-24 18:08:30.573124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.530 [2024-07-24 18:08:30.573140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.530 [2024-07-24 18:08:30.573154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.530 [2024-07-24 18:08:30.573184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.530 qpair failed and we were unable to recover it. 00:25:44.530 [2024-07-24 18:08:30.582983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.530 [2024-07-24 18:08:30.583118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.530 [2024-07-24 18:08:30.583145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.530 [2024-07-24 18:08:30.583160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.530 [2024-07-24 18:08:30.583173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.530 [2024-07-24 18:08:30.583205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.530 qpair failed and we were unable to recover it. 
00:25:44.530 [2024-07-24 18:08:30.592993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.530 [2024-07-24 18:08:30.593136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.530 [2024-07-24 18:08:30.593163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.593179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.593192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.593224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.603014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.603154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.603180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.603196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.603209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.603240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.613025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.613152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.613179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.613194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.613208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.613239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 
00:25:44.531 [2024-07-24 18:08:30.623044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.623206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.623232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.623247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.623260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.623290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.633093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.633235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.633261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.633276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.633295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.633328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.643133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.643271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.643298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.643313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.643326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.643357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 
00:25:44.531 [2024-07-24 18:08:30.653176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.653300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.653326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.653341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.653355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.653387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.663177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.663300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.663327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.663342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.663354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.663385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.673259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.673422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.673449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.673464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.673477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.673509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 
00:25:44.531 [2024-07-24 18:08:30.683357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.683497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.683525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.683541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.683555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.683586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.693403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.693554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.693581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.693596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.693610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.693640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 00:25:44.531 [2024-07-24 18:08:30.703335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.703461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.703486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.703501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.531 [2024-07-24 18:08:30.703515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.531 [2024-07-24 18:08:30.703546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.531 qpair failed and we were unable to recover it. 
00:25:44.531 [2024-07-24 18:08:30.713348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.531 [2024-07-24 18:08:30.713480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.531 [2024-07-24 18:08:30.713506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.531 [2024-07-24 18:08:30.713521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.713534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.713565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 00:25:44.532 [2024-07-24 18:08:30.723377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.723510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.723536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.723557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.723571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.723602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 00:25:44.532 [2024-07-24 18:08:30.733422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.733556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.733582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.733597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.733610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.733642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 
00:25:44.532 [2024-07-24 18:08:30.743429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.743561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.743588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.743604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.743618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.743648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 00:25:44.532 [2024-07-24 18:08:30.753530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.753655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.753680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.753695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.753709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.753738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 00:25:44.532 [2024-07-24 18:08:30.763553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.763688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.763714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.763730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.763743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.763773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 
00:25:44.532 [2024-07-24 18:08:30.773523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.773657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.773682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.773697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.773710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.773742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 00:25:44.532 [2024-07-24 18:08:30.783586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.783728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.783754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.783768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.783782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.783813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 00:25:44.532 [2024-07-24 18:08:30.793616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.532 [2024-07-24 18:08:30.793788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.532 [2024-07-24 18:08:30.793815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.532 [2024-07-24 18:08:30.793830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.532 [2024-07-24 18:08:30.793843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.532 [2024-07-24 18:08:30.793873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.532 qpair failed and we were unable to recover it. 
00:25:44.791 [2024-07-24 18:08:30.803688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.791 [2024-07-24 18:08:30.803826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.791 [2024-07-24 18:08:30.803859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.791 [2024-07-24 18:08:30.803888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.791 [2024-07-24 18:08:30.803911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.791 [2024-07-24 18:08:30.803946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.791 qpair failed and we were unable to recover it. 00:25:44.791 [2024-07-24 18:08:30.813634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.791 [2024-07-24 18:08:30.813772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.791 [2024-07-24 18:08:30.813805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.791 [2024-07-24 18:08:30.813823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.791 [2024-07-24 18:08:30.813837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.791 [2024-07-24 18:08:30.813868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.791 qpair failed and we were unable to recover it. 00:25:44.791 [2024-07-24 18:08:30.823662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.791 [2024-07-24 18:08:30.823796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.791 [2024-07-24 18:08:30.823823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.791 [2024-07-24 18:08:30.823838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.791 [2024-07-24 18:08:30.823852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.791 [2024-07-24 18:08:30.823882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.791 qpair failed and we were unable to recover it. 
00:25:44.791 [2024-07-24 18:08:30.833701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.791 [2024-07-24 18:08:30.833831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.791 [2024-07-24 18:08:30.833858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.791 [2024-07-24 18:08:30.833873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.791 [2024-07-24 18:08:30.833886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.791 [2024-07-24 18:08:30.833918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.791 qpair failed and we were unable to recover it. 00:25:44.791 [2024-07-24 18:08:30.843702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.791 [2024-07-24 18:08:30.843881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.791 [2024-07-24 18:08:30.843907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.791 [2024-07-24 18:08:30.843922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.791 [2024-07-24 18:08:30.843935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.791 [2024-07-24 18:08:30.843967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.791 qpair failed and we were unable to recover it. 00:25:44.791 [2024-07-24 18:08:30.853748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.791 [2024-07-24 18:08:30.853886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.791 [2024-07-24 18:08:30.853914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.853933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.853947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.853988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 
00:25:44.792 [2024-07-24 18:08:30.863750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.863875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.863902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.863917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.863930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.863961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.873827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.873976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.874003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.874019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.874034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.874078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.883782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.883905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.883932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.883946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.883960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.883990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 
00:25:44.792 [2024-07-24 18:08:30.893960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.894087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.894120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.894137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.894150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.894182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.903884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.904010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.904041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.904057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.904071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.904108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.913913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.914084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.914120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.914137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.914150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.914181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 
00:25:44.792 [2024-07-24 18:08:30.923906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.924028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.924054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.924068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.924082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.924121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.933969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.934114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.934141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.934156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.934169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.934201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.943953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.944088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.944121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.944136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.944150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.944186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 
00:25:44.792 [2024-07-24 18:08:30.954107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.954285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.954310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.954325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.954337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.954368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.964021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.964154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.964179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.964194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.964207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.964237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 00:25:44.792 [2024-07-24 18:08:30.974074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.974229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.974255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.974270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.974284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.792 [2024-07-24 18:08:30.974315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.792 qpair failed and we were unable to recover it. 
00:25:44.792 [2024-07-24 18:08:30.984089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.792 [2024-07-24 18:08:30.984229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.792 [2024-07-24 18:08:30.984255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.792 [2024-07-24 18:08:30.984270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.792 [2024-07-24 18:08:30.984283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:30.984326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 00:25:44.793 [2024-07-24 18:08:30.994162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.793 [2024-07-24 18:08:30.994327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.793 [2024-07-24 18:08:30.994359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.793 [2024-07-24 18:08:30.994375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.793 [2024-07-24 18:08:30.994388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:30.994419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 00:25:44.793 [2024-07-24 18:08:31.004145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.793 [2024-07-24 18:08:31.004268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.793 [2024-07-24 18:08:31.004295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.793 [2024-07-24 18:08:31.004310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.793 [2024-07-24 18:08:31.004323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:31.004353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 
00:25:44.793 [2024-07-24 18:08:31.014169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.793 [2024-07-24 18:08:31.014320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.793 [2024-07-24 18:08:31.014346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.793 [2024-07-24 18:08:31.014361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.793 [2024-07-24 18:08:31.014375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:31.014406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 00:25:44.793 [2024-07-24 18:08:31.024277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.793 [2024-07-24 18:08:31.024446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.793 [2024-07-24 18:08:31.024472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.793 [2024-07-24 18:08:31.024486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.793 [2024-07-24 18:08:31.024500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:31.024531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 00:25:44.793 [2024-07-24 18:08:31.034317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.793 [2024-07-24 18:08:31.034442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.793 [2024-07-24 18:08:31.034468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.793 [2024-07-24 18:08:31.034483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.793 [2024-07-24 18:08:31.034502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:31.034534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 
00:25:44.793 [2024-07-24 18:08:31.044273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.793 [2024-07-24 18:08:31.044400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.793 [2024-07-24 18:08:31.044426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.793 [2024-07-24 18:08:31.044440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.793 [2024-07-24 18:08:31.044453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:31.044485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 00:25:44.793 [2024-07-24 18:08:31.054327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:44.793 [2024-07-24 18:08:31.054463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:44.793 [2024-07-24 18:08:31.054490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:44.793 [2024-07-24 18:08:31.054505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:44.793 [2024-07-24 18:08:31.054518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:44.793 [2024-07-24 18:08:31.054551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:44.793 qpair failed and we were unable to recover it. 00:25:45.052 [2024-07-24 18:08:31.064370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.064542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.064578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.064605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.064624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.064671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 
00:25:45.052 [2024-07-24 18:08:31.074387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.074507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.074534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.074550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.074563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.074595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 00:25:45.052 [2024-07-24 18:08:31.084356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.084483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.084510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.084525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.084538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.084569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 00:25:45.052 [2024-07-24 18:08:31.094412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.094539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.094566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.094580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.094594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.094624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 
00:25:45.052 [2024-07-24 18:08:31.104441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.104567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.104594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.104609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.104623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.104653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 00:25:45.052 [2024-07-24 18:08:31.114476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.114605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.114632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.114647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.114660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.114703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 00:25:45.052 [2024-07-24 18:08:31.124513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.124636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.124662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.124684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.124698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.124729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 
00:25:45.052 [2024-07-24 18:08:31.134511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.134642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.134668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.134683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.134697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.134727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 00:25:45.052 [2024-07-24 18:08:31.144665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.144798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.144823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.144838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.144852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.144882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 00:25:45.052 [2024-07-24 18:08:31.154577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.052 [2024-07-24 18:08:31.154724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.052 [2024-07-24 18:08:31.154750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.052 [2024-07-24 18:08:31.154765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.052 [2024-07-24 18:08:31.154778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.052 [2024-07-24 18:08:31.154821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.052 qpair failed and we were unable to recover it. 
00:25:45.053 [2024-07-24 18:08:31.164615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.164740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.164766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.164781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.164794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.164827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.053 [2024-07-24 18:08:31.174650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.174777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.174803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.174818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.174831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.174863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.053 [2024-07-24 18:08:31.184654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.184829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.184855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.184870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.184883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.184913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 
00:25:45.053 [2024-07-24 18:08:31.194690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.194809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.194836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.194851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.194865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.194896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.053 [2024-07-24 18:08:31.204722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.204843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.204867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.204882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.204894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.204924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.053 [2024-07-24 18:08:31.214763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.214903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.214929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.214950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.214965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.214995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 
00:25:45.053 [2024-07-24 18:08:31.224817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.224947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.224974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.224993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.225008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.225040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.053 [2024-07-24 18:08:31.234946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.235113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.235139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.235154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.235167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.235198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.053 [2024-07-24 18:08:31.244839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.244999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.245025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.245040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.245053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.245083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 
00:25:45.053 [2024-07-24 18:08:31.254868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.255026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.255053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.255068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.255081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.255119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.053 [2024-07-24 18:08:31.264967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.053 [2024-07-24 18:08:31.265099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.053 [2024-07-24 18:08:31.265132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.053 [2024-07-24 18:08:31.265147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.053 [2024-07-24 18:08:31.265161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.053 [2024-07-24 18:08:31.265192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.053 qpair failed and we were unable to recover it. 00:25:45.054 [2024-07-24 18:08:31.274902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.054 [2024-07-24 18:08:31.275026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.054 [2024-07-24 18:08:31.275052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.054 [2024-07-24 18:08:31.275067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.054 [2024-07-24 18:08:31.275080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.054 [2024-07-24 18:08:31.275116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.054 qpair failed and we were unable to recover it. 
00:25:45.054 [2024-07-24 18:08:31.284939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.054 [2024-07-24 18:08:31.285066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.054 [2024-07-24 18:08:31.285092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.054 [2024-07-24 18:08:31.285115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.054 [2024-07-24 18:08:31.285130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.054 [2024-07-24 18:08:31.285160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.054 qpair failed and we were unable to recover it. 00:25:45.054 [2024-07-24 18:08:31.295037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.054 [2024-07-24 18:08:31.295186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.054 [2024-07-24 18:08:31.295213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.054 [2024-07-24 18:08:31.295228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.054 [2024-07-24 18:08:31.295241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.054 [2024-07-24 18:08:31.295274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.054 qpair failed and we were unable to recover it. 00:25:45.054 [2024-07-24 18:08:31.304994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.054 [2024-07-24 18:08:31.305138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.054 [2024-07-24 18:08:31.305170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.054 [2024-07-24 18:08:31.305186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.054 [2024-07-24 18:08:31.305199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.054 [2024-07-24 18:08:31.305243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.054 qpair failed and we were unable to recover it. 
00:25:45.054 [2024-07-24 18:08:31.315027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.054 [2024-07-24 18:08:31.315157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.054 [2024-07-24 18:08:31.315184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.054 [2024-07-24 18:08:31.315199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.054 [2024-07-24 18:08:31.315213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.054 [2024-07-24 18:08:31.315244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.054 qpair failed and we were unable to recover it. 00:25:45.313 [2024-07-24 18:08:31.325082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.313 [2024-07-24 18:08:31.325216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.313 [2024-07-24 18:08:31.325244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.313 [2024-07-24 18:08:31.325260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.313 [2024-07-24 18:08:31.325273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.313 [2024-07-24 18:08:31.325306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.313 qpair failed and we were unable to recover it. 00:25:45.313 [2024-07-24 18:08:31.335096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.313 [2024-07-24 18:08:31.335237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.313 [2024-07-24 18:08:31.335264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.313 [2024-07-24 18:08:31.335282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.313 [2024-07-24 18:08:31.335295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.313 [2024-07-24 18:08:31.335328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.313 qpair failed and we were unable to recover it. 
00:25:45.313 [2024-07-24 18:08:31.345141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.313 [2024-07-24 18:08:31.345266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.313 [2024-07-24 18:08:31.345292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.313 [2024-07-24 18:08:31.345307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.313 [2024-07-24 18:08:31.345320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.313 [2024-07-24 18:08:31.345357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.313 qpair failed and we were unable to recover it. 00:25:45.313 [2024-07-24 18:08:31.355239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.313 [2024-07-24 18:08:31.355364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.313 [2024-07-24 18:08:31.355390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.313 [2024-07-24 18:08:31.355404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.313 [2024-07-24 18:08:31.355417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.313 [2024-07-24 18:08:31.355449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.313 qpair failed and we were unable to recover it. 00:25:45.313 [2024-07-24 18:08:31.365169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.313 [2024-07-24 18:08:31.365295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.313 [2024-07-24 18:08:31.365321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.313 [2024-07-24 18:08:31.365336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.313 [2024-07-24 18:08:31.365349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.313 [2024-07-24 18:08:31.365381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.313 qpair failed and we were unable to recover it. 
00:25:45.313 [2024-07-24 18:08:31.375208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.313 [2024-07-24 18:08:31.375332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.313 [2024-07-24 18:08:31.375358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.313 [2024-07-24 18:08:31.375373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.313 [2024-07-24 18:08:31.375387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.313 [2024-07-24 18:08:31.375429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.313 qpair failed and we were unable to recover it. 00:25:45.313 [2024-07-24 18:08:31.385254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.313 [2024-07-24 18:08:31.385382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.313 [2024-07-24 18:08:31.385408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.313 [2024-07-24 18:08:31.385423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.314 [2024-07-24 18:08:31.385439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.314 [2024-07-24 18:08:31.385482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.314 qpair failed and we were unable to recover it. 00:25:45.314 [2024-07-24 18:08:31.395275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.314 [2024-07-24 18:08:31.395404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.314 [2024-07-24 18:08:31.395436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.314 [2024-07-24 18:08:31.395453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.314 [2024-07-24 18:08:31.395468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.314 [2024-07-24 18:08:31.395500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.314 qpair failed and we were unable to recover it. 
00:25:45.314 [2024-07-24 18:08:31.405299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.314 [2024-07-24 18:08:31.405423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.314 [2024-07-24 18:08:31.405449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.314 [2024-07-24 18:08:31.405464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.314 [2024-07-24 18:08:31.405477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.314 [2024-07-24 18:08:31.405521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.314 qpair failed and we were unable to recover it. 00:25:45.314 [2024-07-24 18:08:31.415340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.314 [2024-07-24 18:08:31.415467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.314 [2024-07-24 18:08:31.415493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.314 [2024-07-24 18:08:31.415508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.314 [2024-07-24 18:08:31.415523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.314 [2024-07-24 18:08:31.415554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.314 qpair failed and we were unable to recover it. 00:25:45.314 [2024-07-24 18:08:31.425344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:45.314 [2024-07-24 18:08:31.425475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:45.314 [2024-07-24 18:08:31.425501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:45.314 [2024-07-24 18:08:31.425515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:45.314 [2024-07-24 18:08:31.425528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90 00:25:45.314 [2024-07-24 18:08:31.425558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:45.314 qpair failed and we were unable to recover it. 
00:25:45.314 [2024-07-24 18:08:31.435398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.435524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.435550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.435566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.435585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.435616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.445453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.445581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.445607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.445621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.445635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.445664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.455437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.455565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.455591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.455606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.455618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.455648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.465490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.465613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.465639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.465654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.465667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.465696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.475503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.475626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.475651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.475666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.475680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.475710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.485500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.485629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.485655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.485670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.485683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.485713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.495569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.495741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.495767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.495782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.495795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.495825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.505605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.505729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.505754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.505768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.505781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.505813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.314 [2024-07-24 18:08:31.515610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.314 [2024-07-24 18:08:31.515734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.314 [2024-07-24 18:08:31.515760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.314 [2024-07-24 18:08:31.515775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.314 [2024-07-24 18:08:31.515788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.314 [2024-07-24 18:08:31.515818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.314 qpair failed and we were unable to recover it.
00:25:45.315 [2024-07-24 18:08:31.525635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.315 [2024-07-24 18:08:31.525809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.315 [2024-07-24 18:08:31.525834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.315 [2024-07-24 18:08:31.525855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.315 [2024-07-24 18:08:31.525870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.315 [2024-07-24 18:08:31.525900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.315 qpair failed and we were unable to recover it.
00:25:45.315 [2024-07-24 18:08:31.535700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.315 [2024-07-24 18:08:31.535855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.315 [2024-07-24 18:08:31.535880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.315 [2024-07-24 18:08:31.535895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.315 [2024-07-24 18:08:31.535908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.315 [2024-07-24 18:08:31.535938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.315 qpair failed and we were unable to recover it.
00:25:45.315 [2024-07-24 18:08:31.545688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.315 [2024-07-24 18:08:31.545810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.315 [2024-07-24 18:08:31.545838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.315 [2024-07-24 18:08:31.545857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.315 [2024-07-24 18:08:31.545871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.315 [2024-07-24 18:08:31.545902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.315 qpair failed and we were unable to recover it.
00:25:45.315 [2024-07-24 18:08:31.555718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.315 [2024-07-24 18:08:31.555842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.315 [2024-07-24 18:08:31.555868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.315 [2024-07-24 18:08:31.555883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.315 [2024-07-24 18:08:31.555896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.315 [2024-07-24 18:08:31.555926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.315 qpair failed and we were unable to recover it.
00:25:45.315 [2024-07-24 18:08:31.565756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.315 [2024-07-24 18:08:31.565876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.315 [2024-07-24 18:08:31.565902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.315 [2024-07-24 18:08:31.565917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.315 [2024-07-24 18:08:31.565930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.315 [2024-07-24 18:08:31.565976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.315 qpair failed and we were unable to recover it.
00:25:45.315 [2024-07-24 18:08:31.575748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.315 [2024-07-24 18:08:31.575876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.315 [2024-07-24 18:08:31.575901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.315 [2024-07-24 18:08:31.575916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.315 [2024-07-24 18:08:31.575932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.315 [2024-07-24 18:08:31.575962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.315 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.585819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.585946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.585974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.585990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.586003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.586036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.595833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.595957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.595984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.595999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.596012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.596044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.605908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.606058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.606084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.606098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.606125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.606159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.615881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.616006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.616031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.616055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.616069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.616099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.625895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.626021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.626046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.626062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.626075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.626112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.635947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.636116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.636146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.636165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.636179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.636212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.646052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.646181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.646208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.646223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.646237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.646266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.656086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.656219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.656245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.656259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.656272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.656302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.666024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.666157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.666183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.666198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.666211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.666242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.574 qpair failed and we were unable to recover it.
00:25:45.574 [2024-07-24 18:08:31.676054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.574 [2024-07-24 18:08:31.676186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.574 [2024-07-24 18:08:31.676213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.574 [2024-07-24 18:08:31.676227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.574 [2024-07-24 18:08:31.676241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.574 [2024-07-24 18:08:31.676271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.686065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.686212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.686239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.686253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.686266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.686297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.696132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.696260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.696286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.696301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.696314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.696344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.706160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.706295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.706326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.706342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.706355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.706387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.716167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.716293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.716318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.716333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.716345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.716376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.726192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.726319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.726345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.726360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.726373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.726418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.736309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.736435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.736460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.736475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.736488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.736518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.746275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.746451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.746477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.746492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.746505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.746553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.756293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.756411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.756437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.756452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.756465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.756497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.766337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.766456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.766483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.766498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.766510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.766540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.776437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.776566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.776592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.776607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.776620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.776652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.786367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.786490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.786516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.786531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.786544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.786576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.796393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.796513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.796544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.796560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.796573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.796603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.575 [2024-07-24 18:08:31.806411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.575 [2024-07-24 18:08:31.806530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.575 [2024-07-24 18:08:31.806556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.575 [2024-07-24 18:08:31.806570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.575 [2024-07-24 18:08:31.806584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.575 [2024-07-24 18:08:31.806613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.575 qpair failed and we were unable to recover it.
00:25:45.576 [2024-07-24 18:08:31.816475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.576 [2024-07-24 18:08:31.816611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.576 [2024-07-24 18:08:31.816636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.576 [2024-07-24 18:08:31.816651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.576 [2024-07-24 18:08:31.816664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.576 [2024-07-24 18:08:31.816694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.576 qpair failed and we were unable to recover it.
00:25:45.576 [2024-07-24 18:08:31.826512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.576 [2024-07-24 18:08:31.826642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.576 [2024-07-24 18:08:31.826668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.576 [2024-07-24 18:08:31.826683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.576 [2024-07-24 18:08:31.826696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.576 [2024-07-24 18:08:31.826726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.576 qpair failed and we were unable to recover it.
00:25:45.576 [2024-07-24 18:08:31.836537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.576 [2024-07-24 18:08:31.836666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.576 [2024-07-24 18:08:31.836692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.576 [2024-07-24 18:08:31.836707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.576 [2024-07-24 18:08:31.836725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.576 [2024-07-24 18:08:31.836758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.576 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.846532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.846654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.846683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.846698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.846711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.846744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.856575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.856753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.856781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.856796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.856809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.856840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.866638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.866769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.866795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.866811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.866824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.866870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.876633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.876757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.876784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.876801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.876815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.876848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.886636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.886761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.886788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.886803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.886816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.886846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.896778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.896906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.896932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.896947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.896960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.896989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.906782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.906909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.906935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.906950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.906963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.906995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.916730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.916860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.916886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.916901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.916913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.916944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.926768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.926891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.926918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.926932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.926951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.926995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.834 [2024-07-24 18:08:31.936793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.834 [2024-07-24 18:08:31.936918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.834 [2024-07-24 18:08:31.936944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.834 [2024-07-24 18:08:31.936958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.834 [2024-07-24 18:08:31.936972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.834 [2024-07-24 18:08:31.937014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.834 qpair failed and we were unable to recover it.
00:25:45.835 [2024-07-24 18:08:31.946808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.835 [2024-07-24 18:08:31.946931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.835 [2024-07-24 18:08:31.946957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.835 [2024-07-24 18:08:31.946973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.835 [2024-07-24 18:08:31.946986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.835 [2024-07-24 18:08:31.947016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.835 qpair failed and we were unable to recover it.
00:25:45.835 [2024-07-24 18:08:31.956842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.835 [2024-07-24 18:08:31.956970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.835 [2024-07-24 18:08:31.956996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.835 [2024-07-24 18:08:31.957012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.835 [2024-07-24 18:08:31.957025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.835 [2024-07-24 18:08:31.957055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.835 qpair failed and we were unable to recover it.
00:25:45.835 [2024-07-24 18:08:31.966927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:45.835 [2024-07-24 18:08:31.967053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:45.835 [2024-07-24 18:08:31.967081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:45.835 [2024-07-24 18:08:31.967096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:45.835 [2024-07-24 18:08:31.967119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f600c000b90
00:25:45.835 [2024-07-24 18:08:31.967164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:45.835 qpair failed and we were unable to recover it.
00:25:45.835 [2024-07-24 18:08:31.967205] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:25:45.835 A controller has encountered a failure and is being reset.
00:25:45.835 Controller properly reset.
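The block above is one failure pattern repeated per reconnect attempt: the target rejects each I/O-queue CONNECT because controller ID 0x1 is no longer valid on its side, the host's connect poll returns rc -5 (-EIO), and the queue pair is dropped with CQ transport error -6 (-ENXIO, the "No such device or address" the log itself glosses), until a failed keep-alive finally forces the controller reset. The status pair "sct 1, sc 130" decodes as status code type 1 (command specific) with status code 0x82, which for a Fabrics CONNECT command is Invalid Parameter. A minimal bash sketch of that decoding follows; the helper name is ours, and the 0x82 mapping is assumed from the published NVMe-oF command-set values, so check it against your spec revision:

  # decode_connect_status: hedged helper mapping the "sct N, sc M" pair
  # printed by nvme_fabric.c above to a readable label. The sct 1 / 0x82
  # meaning is assumed from the NVMe-oF spec (CONNECT Invalid Parameter).
  decode_connect_status() {
    local sct=$1 sc=$2 hex
    hex=$(printf '0x%02x' "$sc")
    if [ "$sct" -eq 1 ] && [ "$hex" = "0x82" ]; then
      echo "sct 1 / sc $hex: Fabrics CONNECT Invalid Parameter (e.g. stale or unknown cntlid)"
    else
      echo "sct $sct / sc $hex: see the status code tables in the NVMe base and Fabrics specs"
    fi
  }
  decode_connect_status 1 130   # the pair logged on every attempt above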
00:25:47.208 Initializing NVMe Controllers
00:25:47.208 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:47.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:47.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:25:47.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:25:47.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:25:47.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:25:47.208 Initialization complete. Launching workers.
00:25:47.208 Starting thread on core 1
00:25:47.208 Starting thread on core 2
00:25:47.208 Starting thread on core 3
00:25:47.208 Starting thread on core 0
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:25:47.208
00:25:47.208 real 0m10.727s
00:25:47.208 user 0m21.239s
00:25:47.208 sys 0m5.813s
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:47.208 ************************************
00:25:47.208 END TEST nvmf_target_disconnect_tc2
00:25:47.208 ************************************
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:47.208 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:47.208 rmmod nvme_tcp
00:25:47.208 rmmod nvme_fabrics
00:25:47.208 rmmod nvme_keyring
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2892777 ']'
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2892777
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2892777 ']'
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2892777
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2892777
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2892777'
00:25:47.467 killing process with pid 2892777
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2892777
00:25:47.467 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2892777
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:47.727 18:08:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:49.629 18:08:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:49.629
00:25:49.629 real 0m15.519s
00:25:49.629 user 0m46.914s
00:25:49.629 sys 0m7.918s
00:25:49.629 18:08:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:49.629 18:08:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:25:49.629 ************************************
00:25:49.629 END TEST nvmf_target_disconnect
00:25:49.629 ************************************
00:25:49.629 18:08:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:25:49.629
00:25:49.629 real 5m3.627s
00:25:49.629 user 10m44.373s
00:25:49.629 sys 1m13.497s
00:25:49.629 18:08:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:49.629 18:08:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:49.629 ************************************
00:25:49.629 END TEST nvmf_host
00:25:49.629 ************************************
00:25:49.629
00:25:49.629 real 19m40.859s
00:25:49.629 user 46m3.055s
00:25:49.629 sys 5m7.836s
00:25:49.629 18:08:35 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:49.629 18:08:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:49.629 ************************************
00:25:49.629 END TEST nvmf_tcp
00:25:49.629 ************************************
00:25:49.890 18:08:35 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:25:49.890 18:08:35 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:49.890 18:08:35 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:49.890 18:08:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:49.890 18:08:35 -- common/autotest_common.sh@10 -- # set +x
00:25:49.890 ************************************
00:25:49.890 START TEST spdkcli_nvmf_tcp
00:25:49.890 ************************************
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:49.890 * Looking for test storage...
00:25:49.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2893977
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2893977
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2893977 ']'
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:49.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:49.890 18:08:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:49.890 [2024-07-24 18:08:36.038095] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization...
00:25:49.890 [2024-07-24 18:08:36.038191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893977 ]
00:25:49.890 EAL: No free 2048 kB hugepages reported on node 1
00:25:50.151 [2024-07-24 18:08:36.095721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:50.151 [2024-07-24 18:08:36.205034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:50.151 [2024-07-24 18:08:36.205038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:50.151 18:08:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:25:50.151 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:25:50.151 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:25:50.151 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:25:50.151 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:25:50.151 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:25:50.151 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:25:50.151 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\''
'\''Malloc4'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:50.151 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:50.151 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:50.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:50.151 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:50.151 ' 00:25:52.680 [2024-07-24 18:08:38.896626] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.053 [2024-07-24 18:08:40.148976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:56.581 [2024-07-24 18:08:42.404150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:58.480 [2024-07-24 18:08:44.338215] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:59.854 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:59.854 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:59.854 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:59.854 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:59.854 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:59.854 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:59.855 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 
00:25:59.855 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:59.855 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:59.855 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:59.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:59.855 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:59.855 18:08:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:00.112 18:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 18:08:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:00.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:00.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:00.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:00.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:00.370 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:00.370 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:00.370 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:00.370 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:00.370 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:00.370 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:00.370 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:00.370 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:00.370 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:00.370 ' 00:26:05.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:05.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:05.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:05.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:05.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:05.631 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:05.631 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 
'nqn.2014-08.org.spdk:cnode3', False] 00:26:05.631 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:05.631 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:05.631 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:05.631 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:05.631 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:05.631 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:05.631 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:05.631 18:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2893977 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2893977 ']' 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2893977 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2893977 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2893977' 00:26:05.632 killing process with pid 2893977 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2893977 00:26:05.632 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2893977 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2893977 ']' 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2893977 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2893977 ']' 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2893977 00:26:05.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2893977) - No such process 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2893977 is not found' 00:26:05.890 Process with pid 2893977 is not found 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:05.890 00:26:05.890 real 0m16.019s 00:26:05.890 user 0m33.755s 00:26:05.890 sys 0m0.823s 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.890 18:08:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:26:05.890 ************************************ 00:26:05.890 END TEST spdkcli_nvmf_tcp 00:26:05.890 ************************************ 00:26:05.890 18:08:51 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:05.890 18:08:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:05.890 18:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.890 18:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:05.890 ************************************ 00:26:05.890 START TEST nvmf_identify_passthru 00:26:05.890 ************************************ 00:26:05.891 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:05.891 * Looking for test storage... 00:26:05.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:05.891 18:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.891 18:08:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.891 18:08:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.891 18:08:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.891 18:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.891 18:08:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.891 18:08:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.891 18:08:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:05.891 18:08:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.891 18:08:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.891 18:08:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:05.891 18:08:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:05.891 18:08:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:05.891 18:08:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.791 18:08:53 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:07.791 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:07.791 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:07.791 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:07.792 Found net devices under 0000:09:00.0: cvl_0_0 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:07.792 Found net devices under 0000:09:00.1: cvl_0_1 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:07.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:26:07.792 00:26:07.792 --- 10.0.0.2 ping statistics --- 00:26:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.792 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:07.792 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:26:07.792 00:26:07.792 --- 10.0.0.1 ping statistics --- 00:26:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.792 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:07.792 18:08:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:07.792 18:08:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:07.792 18:08:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:07.792 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:26:08.051 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:26:08.051 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:0b:00.0 00:26:08.051 18:08:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:0b:00.0 00:26:08.051 18:08:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:26:08.051 18:08:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:26:08.051 18:08:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:26:08.051 18:08:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:08.051 18:08:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:08.051 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.235 
18:08:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:26:12.235 18:08:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:26:12.235 18:08:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:12.235 18:08:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:12.235 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.417 18:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:16.417 18:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.417 18:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.417 18:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2898587 00:26:16.417 18:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:16.417 18:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:16.417 18:09:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2898587 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2898587 ']' 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.417 18:09:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.417 [2024-07-24 18:09:02.423497] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:26:16.417 [2024-07-24 18:09:02.423595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.417 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.417 [2024-07-24 18:09:02.492046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.417 [2024-07-24 18:09:02.610258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.417 [2024-07-24 18:09:02.610319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:16.417 [2024-07-24 18:09:02.610335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.417 [2024-07-24 18:09:02.610349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.417 [2024-07-24 18:09:02.610361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.417 [2024-07-24 18:09:02.610451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.417 [2024-07-24 18:09:02.610527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.417 [2024-07-24 18:09:02.610626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.417 [2024-07-24 18:09:02.610629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.348 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.348 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:26:17.348 18:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:17.348 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.348 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:17.348 INFO: Log level set to 20 00:26:17.348 INFO: Requests: 00:26:17.348 { 00:26:17.348 "jsonrpc": "2.0", 00:26:17.348 "method": "nvmf_set_config", 00:26:17.348 "id": 1, 00:26:17.348 "params": { 00:26:17.348 "admin_cmd_passthru": { 00:26:17.348 "identify_ctrlr": true 00:26:17.348 } 00:26:17.348 } 00:26:17.348 } 00:26:17.348 00:26:17.348 INFO: response: 00:26:17.348 { 00:26:17.348 "jsonrpc": "2.0", 00:26:17.348 "id": 1, 00:26:17.348 "result": true 00:26:17.348 } 00:26:17.349 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.349 18:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:17.349 INFO: Setting log level to 20 00:26:17.349 INFO: Setting log level to 20 00:26:17.349 INFO: Log level set to 20 00:26:17.349 INFO: Log level set to 20 00:26:17.349 INFO: Requests: 00:26:17.349 { 00:26:17.349 "jsonrpc": "2.0", 00:26:17.349 "method": "framework_start_init", 00:26:17.349 "id": 1 00:26:17.349 } 00:26:17.349 00:26:17.349 INFO: Requests: 00:26:17.349 { 00:26:17.349 "jsonrpc": "2.0", 00:26:17.349 "method": "framework_start_init", 00:26:17.349 "id": 1 00:26:17.349 } 00:26:17.349 00:26:17.349 [2024-07-24 18:09:03.470343] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:17.349 INFO: response: 00:26:17.349 { 00:26:17.349 "jsonrpc": "2.0", 00:26:17.349 "id": 1, 00:26:17.349 "result": true 00:26:17.349 } 00:26:17.349 00:26:17.349 INFO: response: 00:26:17.349 { 00:26:17.349 "jsonrpc": "2.0", 00:26:17.349 "id": 1, 00:26:17.349 "result": true 00:26:17.349 } 00:26:17.349 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.349 18:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.349 18:09:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.349 INFO: Setting log level to 40 00:26:17.349 INFO: Setting log level to 40 00:26:17.349 INFO: Setting log level to 40 00:26:17.349 [2024-07-24 18:09:03.480360] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.349 18:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:17.349 18:09:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.349 18:09:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.685 Nvme0n1 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.685 [2024-07-24 18:09:06.373829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.685 [ 00:26:20.685 { 00:26:20.685 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.685 "subtype": "Discovery", 00:26:20.685 "listen_addresses": [], 00:26:20.685 "allow_any_host": true, 00:26:20.685 "hosts": [] 00:26:20.685 }, 00:26:20.685 { 00:26:20.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.685 "subtype": "NVMe", 00:26:20.685 "listen_addresses": [ 00:26:20.685 { 00:26:20.685 "trtype": "TCP", 00:26:20.685 "adrfam": "IPv4", 00:26:20.685 "traddr": "10.0.0.2", 00:26:20.685 "trsvcid": "4420" 00:26:20.685 } 00:26:20.685 ], 00:26:20.685 "allow_any_host": true, 00:26:20.685 "hosts": [], 00:26:20.685 "serial_number": 
"SPDK00000000000001", 00:26:20.685 "model_number": "SPDK bdev Controller", 00:26:20.685 "max_namespaces": 1, 00:26:20.685 "min_cntlid": 1, 00:26:20.685 "max_cntlid": 65519, 00:26:20.685 "namespaces": [ 00:26:20.685 { 00:26:20.685 "nsid": 1, 00:26:20.685 "bdev_name": "Nvme0n1", 00:26:20.685 "name": "Nvme0n1", 00:26:20.685 "nguid": "49F4A1407D0548B09F5DADCB6DFCDAC4", 00:26:20.685 "uuid": "49f4a140-7d05-48b0-9f5d-adcb6dfcdac4" 00:26:20.685 } 00:26:20.685 ] 00:26:20.685 } 00:26:20.685 ] 00:26:20.685 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:20.685 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:20.685 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:20.686 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:20.686 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.686 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:20.686 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:26:20.686 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:20.686 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.686 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:20.686 18:09:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:20.686 rmmod nvme_tcp 00:26:20.686 rmmod nvme_fabrics 00:26:20.686 rmmod nvme_keyring 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:20.686 18:09:06 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2898587 ']' 00:26:20.686 18:09:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2898587 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2898587 ']' 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2898587 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2898587 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2898587' 00:26:20.686 killing process with pid 2898587 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2898587 00:26:20.686 18:09:06 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2898587 00:26:22.585 18:09:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:22.585 18:09:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:22.585 18:09:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:22.585 18:09:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.585 18:09:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.585 18:09:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.585 18:09:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:22.585 18:09:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.488 18:09:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:24.488 00:26:24.488 real 0m18.457s 00:26:24.488 user 0m29.535s 00:26:24.488 sys 0m2.198s 00:26:24.488 18:09:10 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:24.488 18:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:24.488 ************************************ 00:26:24.488 END TEST nvmf_identify_passthru 00:26:24.488 ************************************ 00:26:24.488 18:09:10 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:24.488 18:09:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:24.488 18:09:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.488 18:09:10 -- common/autotest_common.sh@10 -- # set +x 00:26:24.488 ************************************ 00:26:24.488 START TEST nvmf_dif 00:26:24.489 ************************************ 00:26:24.489 18:09:10 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:24.489 * Looking for test storage... 
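
For reference, the identify_passthru run that just completed above drives the target entirely over SPDK's JSON-RPC interface. A standalone sketch of the same sequence (arguments copied from the trace at identify_passthru.sh@41-73; scripts/rpc.py stands in for the harness's rpc_cmd wrapper, and a running nvmf_tgt plus an SPDK checkout are assumed):

# Sketch only: the RPC and identify calls traced above, reproduced by hand.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems    # prints the subsystem JSON shown above
# identify over the fabric (serial/model must match the passthru'd device), then tear down:
./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
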
00:26:24.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:24.489 18:09:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.489 18:09:10 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.489 18:09:10 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.489 18:09:10 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.489 18:09:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.489 18:09:10 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.489 18:09:10 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.489 18:09:10 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:26:24.489 18:09:10 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.489 18:09:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:24.489 18:09:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:24.489 18:09:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:24.489 18:09:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:24.489 18:09:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.489 18:09:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:24.489 18:09:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.489 18:09:10 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.489 18:09:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:26.393 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:26.393 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
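
The gather_supported_nvmf_pci_devs walk above (its per-device results continue just below) builds PCI ID tables for Intel E810/X722 and Mellanox parts, then resolves each matching function to its kernel net device through sysfs. A minimal illustration of that lookup, hardcoded to the E810 ID (0x8086:0x159b) matched in this run -- illustrative, not the harness's own code:

# List the net interfaces behind Intel E810 functions; this is the mapping the
# trace resolves to cvl_0_0 and cvl_0_1 below.
for pci in /sys/bus/pci/devices/*; do
  [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
  echo "Found $(basename "$pci"): $(ls "$pci"/net 2>/dev/null)"
done
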
00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:26.393 Found net devices under 0000:09:00.0: cvl_0_0 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:26.393 Found net devices under 0000:09:00.1: cvl_0_1 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:26.393 18:09:12 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.652 18:09:12 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.652 18:09:12 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.652 18:09:12 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:26.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:26:26.652 00:26:26.652 --- 10.0.0.2 ping statistics --- 00:26:26.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.652 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:26.652 18:09:12 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:26.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:26:26.652 00:26:26.652 --- 10.0.0.1 ping statistics --- 00:26:26.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.652 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:26:26.652 18:09:12 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.652 18:09:12 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:26.652 18:09:12 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:26.652 18:09:12 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:27.588 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:27.588 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:27.588 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:27.588 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:27.588 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:27.588 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:27.588 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:27.588 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:27.588 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:27.588 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:27.588 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:27.588 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:27.588 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:27.588 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:27.588 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:27.588 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:27.588 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.847 18:09:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:27.847 18:09:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.847 18:09:13 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.847 18:09:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2902372 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:27.847 18:09:13 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2902372 00:26:27.847 18:09:13 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2902372 ']' 00:26:27.848 18:09:13 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.848 18:09:13 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.848 18:09:13 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.848 18:09:13 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.848 18:09:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:27.848 [2024-07-24 18:09:14.026541] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:26:27.848 [2024-07-24 18:09:14.026610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.848 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.848 [2024-07-24 18:09:14.092571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.106 [2024-07-24 18:09:14.210998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.106 [2024-07-24 18:09:14.211062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.106 [2024-07-24 18:09:14.211079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.106 [2024-07-24 18:09:14.211092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.106 [2024-07-24 18:09:14.211112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
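
Condensed, the nvmf_tcp_init sequence traced above reduces to the wiring below (commands copied from the trace; cvl_0_0 and cvl_0_1 are this machine's two E810 ports): the target-side port moves into its own network namespace, each end gets a 10.0.0.0/24 address, connectivity is verified both ways, and the target application is then launched inside the namespace.

# Replay of the traced commands; run as root. cvl_0_0 = target side, cvl_0_1 = initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target (0.117 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator (0.128 ms above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
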
00:26:28.106 [2024-07-24 18:09:14.211160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:28.106 18:09:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:28.106 18:09:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.106 18:09:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:28.106 18:09:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:28.106 [2024-07-24 18:09:14.365473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.106 18:09:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:28.106 18:09:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:28.365 ************************************ 00:26:28.365 START TEST fio_dif_1_default 00:26:28.365 ************************************ 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:28.365 bdev_null0 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:28.365 [2024-07-24 18:09:14.425799] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:28.365 { 00:26:28.365 "params": { 00:26:28.365 "name": "Nvme$subsystem", 00:26:28.365 "trtype": "$TEST_TRANSPORT", 00:26:28.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.365 "adrfam": "ipv4", 00:26:28.365 "trsvcid": "$NVMF_PORT", 00:26:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.365 "hdgst": ${hdgst:-false}, 00:26:28.365 "ddgst": ${ddgst:-false} 00:26:28.365 }, 00:26:28.365 "method": "bdev_nvme_attach_controller" 00:26:28.365 } 00:26:28.365 EOF 00:26:28.365 )") 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:28.365 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:28.366 "params": { 00:26:28.366 "name": "Nvme0", 00:26:28.366 "trtype": "tcp", 00:26:28.366 "traddr": "10.0.0.2", 00:26:28.366 "adrfam": "ipv4", 00:26:28.366 "trsvcid": "4420", 00:26:28.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:28.366 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:28.366 "hdgst": false, 00:26:28.366 "ddgst": false 00:26:28.366 }, 00:26:28.366 "method": "bdev_nvme_attach_controller" 00:26:28.366 }' 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:28.366 18:09:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.624 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:28.624 fio-3.35 00:26:28.624 Starting 1 thread 00:26:28.624 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.820 00:26:40.820 filename0: (groupid=0, jobs=1): err= 0: pid=2902602: Wed Jul 24 18:09:25 2024 00:26:40.820 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:26:40.820 slat (nsec): min=4724, max=32862, avg=9422.83, stdev=2642.87 00:26:40.820 clat (usec): min=40881, max=48418, avg=41005.17, stdev=480.03 00:26:40.820 lat (usec): min=40888, max=48432, avg=41014.60, stdev=479.98 00:26:40.820 clat percentiles (usec): 00:26:40.820 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:40.820 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:40.820 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:40.820 | 99.00th=[41157], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497], 00:26:40.820 | 99.99th=[48497] 00:26:40.820 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:26:40.820 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:26:40.820 lat (msec) : 50=100.00% 00:26:40.820 cpu : usr=89.84%, sys=9.91%, ctx=12, majf=0, minf=242 00:26:40.820 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.820 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.820 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:40.820 00:26:40.820 Run status group 0 (all jobs): 00:26:40.820 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.820 00:26:40.820 real 0m11.087s 00:26:40.820 user 0m10.118s 00:26:40.820 sys 0m1.261s 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:40.820 ************************************ 00:26:40.820 END TEST fio_dif_1_default 00:26:40.820 ************************************ 00:26:40.820 18:09:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:40.820 18:09:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:40.820 18:09:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.820 18:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:40.820 ************************************ 00:26:40.820 START TEST fio_dif_1_multi_subsystems 00:26:40.820 ************************************ 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:26:40.820 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 bdev_null0 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 [2024-07-24 18:09:25.563858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 bdev_null1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:40.821 { 00:26:40.821 "params": { 00:26:40.821 "name": "Nvme$subsystem", 00:26:40.821 "trtype": "$TEST_TRANSPORT", 00:26:40.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.821 "adrfam": "ipv4", 00:26:40.821 "trsvcid": "$NVMF_PORT", 00:26:40.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.821 "hdgst": ${hdgst:-false}, 00:26:40.821 "ddgst": ${ddgst:-false} 00:26:40.821 }, 00:26:40.821 "method": "bdev_nvme_attach_controller" 00:26:40.821 } 00:26:40.821 EOF 00:26:40.821 )") 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # shift 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:40.821 { 00:26:40.821 "params": { 00:26:40.821 "name": "Nvme$subsystem", 00:26:40.821 "trtype": "$TEST_TRANSPORT", 00:26:40.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.821 "adrfam": "ipv4", 00:26:40.821 "trsvcid": "$NVMF_PORT", 00:26:40.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.821 "hdgst": ${hdgst:-false}, 00:26:40.821 "ddgst": ${ddgst:-false} 00:26:40.821 }, 00:26:40.821 "method": "bdev_nvme_attach_controller" 00:26:40.821 } 00:26:40.821 EOF 00:26:40.821 )") 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
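
The fio_bdev call traced above hands fio two descriptors: the generated job file on /dev/fd/61 and the bdev_nvme attach config on /dev/fd/62 (that JSON is printed just below), with the SPDK bdev engine preloaded. A hedged standalone equivalent -- the job parameters are read off the fio banner below, while the file name, section names, and bdev names are illustrative assumptions:

# Sketch, not the harness's generated files. bdev.json is assumed to contain the
# two bdev_nvme_attach_controller entries printed below.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
rw=randread
bs=4096
iodepth=4
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio dif.fio
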
00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:40.821 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:40.821 "params": { 00:26:40.821 "name": "Nvme0", 00:26:40.821 "trtype": "tcp", 00:26:40.821 "traddr": "10.0.0.2", 00:26:40.821 "adrfam": "ipv4", 00:26:40.821 "trsvcid": "4420", 00:26:40.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:40.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:40.822 "hdgst": false, 00:26:40.822 "ddgst": false 00:26:40.822 }, 00:26:40.822 "method": "bdev_nvme_attach_controller" 00:26:40.822 },{ 00:26:40.822 "params": { 00:26:40.822 "name": "Nvme1", 00:26:40.822 "trtype": "tcp", 00:26:40.822 "traddr": "10.0.0.2", 00:26:40.822 "adrfam": "ipv4", 00:26:40.822 "trsvcid": "4420", 00:26:40.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:40.822 "hdgst": false, 00:26:40.822 "ddgst": false 00:26:40.822 }, 00:26:40.822 "method": "bdev_nvme_attach_controller" 00:26:40.822 }' 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:40.822 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.822 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:40.822 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:40.822 fio-3.35 00:26:40.822 Starting 2 threads 00:26:40.822 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.807 00:26:50.807 filename0: (groupid=0, jobs=1): err= 0: pid=2904005: Wed Jul 24 18:09:36 2024 00:26:50.807 read: IOPS=143, BW=574KiB/s (588kB/s)(5744KiB/10009msec) 00:26:50.807 slat (nsec): min=6464, max=39419, avg=9778.51, stdev=2511.37 00:26:50.807 clat (usec): min=725, max=46323, avg=27848.59, stdev=18910.80 00:26:50.807 lat (usec): min=733, max=46339, avg=27858.37, stdev=18910.55 00:26:50.807 clat percentiles (usec): 00:26:50.807 | 1.00th=[ 750], 5.00th=[ 775], 10.00th=[ 807], 20.00th=[ 848], 00:26:50.807 | 30.00th=[ 906], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:50.807 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:50.807 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:26:50.807 | 99.99th=[46400] 
00:26:50.807 bw ( KiB/s): min= 384, max= 768, per=59.44%, avg=572.80, stdev=179.49, samples=20 00:26:50.807 iops : min= 96, max= 192, avg=143.20, stdev=44.87, samples=20 00:26:50.807 lat (usec) : 750=1.46%, 1000=30.85% 00:26:50.807 lat (msec) : 2=0.56%, 50=67.13% 00:26:50.807 cpu : usr=94.31%, sys=5.42%, ctx=15, majf=0, minf=140 00:26:50.807 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.807 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.807 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=2904006: Wed Jul 24 18:09:36 2024 00:26:50.807 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10008msec) 00:26:50.807 slat (nsec): min=8045, max=31543, avg=10189.10, stdev=2940.92 00:26:50.807 clat (usec): min=40807, max=46339, avg=41150.95, stdev=493.23 00:26:50.807 lat (usec): min=40815, max=46355, avg=41161.14, stdev=493.91 00:26:50.807 clat percentiles (usec): 00:26:50.807 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:50.807 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:50.807 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:26:50.807 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:26:50.807 | 99.99th=[46400] 00:26:50.807 bw ( KiB/s): min= 384, max= 416, per=40.21%, avg=387.20, stdev= 9.85, samples=20 00:26:50.808 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:26:50.808 lat (msec) : 50=100.00% 00:26:50.808 cpu : usr=93.86%, sys=5.88%, ctx=10, majf=0, minf=118 00:26:50.808 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.808 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.808 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:50.808 00:26:50.808 Run status group 0 (all jobs): 00:26:50.808 READ: bw=962KiB/s (985kB/s), 388KiB/s-574KiB/s (398kB/s-588kB/s), io=9632KiB (9863kB), run=10008-10009msec 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 00:26:50.808 real 0m11.324s 00:26:50.808 user 0m20.331s 00:26:50.808 sys 0m1.433s 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 ************************************ 00:26:50.808 END TEST fio_dif_1_multi_subsystems 00:26:50.808 ************************************ 00:26:50.808 18:09:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:50.808 18:09:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:50.808 18:09:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 ************************************ 00:26:50.808 START TEST fio_dif_rand_params 00:26:50.808 ************************************ 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:50.808 18:09:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 bdev_null0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 [2024-07-24 18:09:36.932288] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:50.808 { 00:26:50.808 "params": { 00:26:50.808 "name": "Nvme$subsystem", 00:26:50.808 "trtype": "$TEST_TRANSPORT", 00:26:50.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.808 "adrfam": "ipv4", 00:26:50.808 "trsvcid": "$NVMF_PORT", 00:26:50.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.808 "hdgst": ${hdgst:-false}, 00:26:50.808 "ddgst": ${ddgst:-false} 00:26:50.808 }, 00:26:50.808 "method": "bdev_nvme_attach_controller" 00:26:50.808 } 00:26:50.808 EOF 00:26:50.808 )") 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:50.808 
18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
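This is the point where the harness wires fio to SPDK: gen_nvmf_target_json emits a one-subsystem JSON bdev config that fio reads over /dev/fd/62, while the ldd | grep libasan | awk '{print $3}' probe above decides whether a sanitizer runtime has to be preloaded ahead of the spdk_bdev fio plugin. A minimal sketch of that pattern, assuming an illustrative plugin path and a plain job file in place of the /dev/fd/61 pipe the test feeds:

    #!/usr/bin/env bash
    # If the fio plugin was built with ASan, its runtime must load first,
    # so probe the plugin's shared-library dependencies.
    plugin=/path/to/spdk/build/fio/spdk_bdev          # assumed build location
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the (possibly empty) sanitizer lib, then the plugin itself,
    # so fio can resolve --ioengine=spdk_bdev at startup.
    LD_PRELOAD="$asan_lib $plugin" fio \
        --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

The bdev_nvme_attach_controller parameters printed on the next lines are the config that actually lands on that --spdk_json_conf descriptor during the run.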
00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:50.808 18:09:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:50.808 "params": { 00:26:50.808 "name": "Nvme0", 00:26:50.808 "trtype": "tcp", 00:26:50.808 "traddr": "10.0.0.2", 00:26:50.808 "adrfam": "ipv4", 00:26:50.808 "trsvcid": "4420", 00:26:50.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:50.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:50.808 "hdgst": false, 00:26:50.808 "ddgst": false 00:26:50.808 }, 00:26:50.809 "method": "bdev_nvme_attach_controller" 00:26:50.809 }' 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:50.809 18:09:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:51.068 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:51.068 ... 
00:26:51.068 fio-3.35 00:26:51.068 Starting 3 threads 00:26:51.068 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.691 00:26:57.691 filename0: (groupid=0, jobs=1): err= 0: pid=2905397: Wed Jul 24 18:09:42 2024 00:26:57.691 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(127MiB/5006msec) 00:26:57.691 slat (nsec): min=6658, max=35762, avg=12526.38, stdev=3546.12 00:26:57.691 clat (usec): min=6062, max=90764, avg=14774.36, stdev=11957.35 00:26:57.691 lat (usec): min=6074, max=90777, avg=14786.89, stdev=11957.29 00:26:57.691 clat percentiles (usec): 00:26:57.691 | 1.00th=[ 6325], 5.00th=[ 6652], 10.00th=[ 7767], 20.00th=[ 9110], 00:26:57.691 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11207], 60.00th=[12518], 00:26:57.691 | 70.00th=[13435], 80.00th=[14484], 90.00th=[16909], 95.00th=[51119], 00:26:57.691 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[90702], 00:26:57.691 | 99.99th=[90702] 00:26:57.691 bw ( KiB/s): min=18688, max=31488, per=33.92%, avg=25907.20, stdev=4334.06, samples=10 00:26:57.691 iops : min= 146, max= 246, avg=202.40, stdev=33.86, samples=10 00:26:57.691 lat (msec) : 10=34.98%, 20=56.26%, 50=2.66%, 100=6.11% 00:26:57.691 cpu : usr=91.37%, sys=8.19%, ctx=16, majf=0, minf=109 00:26:57.691 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.691 issued rwts: total=1015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:57.691 filename0: (groupid=0, jobs=1): err= 0: pid=2905398: Wed Jul 24 18:09:42 2024 00:26:57.691 read: IOPS=184, BW=23.0MiB/s (24.1MB/s)(115MiB/5005msec) 00:26:57.691 slat (nsec): min=6549, max=83004, avg=13667.24, stdev=4903.64 00:26:57.691 clat (usec): min=5322, max=57551, avg=16278.98, stdev=13945.11 00:26:57.691 lat (usec): min=5349, max=57564, avg=16292.64, stdev=13944.87 00:26:57.691 clat percentiles (usec): 00:26:57.691 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 7635], 20.00th=[ 8717], 00:26:57.691 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11863], 60.00th=[12649], 00:26:57.691 | 70.00th=[13566], 80.00th=[15139], 90.00th=[50070], 95.00th=[52691], 00:26:57.691 | 99.00th=[56361], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:26:57.691 | 99.99th=[57410] 00:26:57.691 bw ( KiB/s): min=17664, max=29184, per=30.77%, avg=23500.80, stdev=4542.43, samples=10 00:26:57.691 iops : min= 138, max= 228, avg=183.60, stdev=35.49, samples=10 00:26:57.691 lat (msec) : 10=34.09%, 20=53.20%, 50=2.82%, 100=9.88% 00:26:57.691 cpu : usr=91.57%, sys=8.01%, ctx=21, majf=0, minf=162 00:26:57.691 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.691 issued rwts: total=921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:57.691 filename0: (groupid=0, jobs=1): err= 0: pid=2905399: Wed Jul 24 18:09:42 2024 00:26:57.691 read: IOPS=213, BW=26.6MiB/s (27.9MB/s)(134MiB/5046msec) 00:26:57.691 slat (nsec): min=6700, max=46480, avg=12977.49, stdev=3783.11 00:26:57.691 clat (usec): min=5318, max=90800, avg=14025.08, stdev=11014.48 00:26:57.691 lat (usec): min=5330, max=90812, avg=14038.05, stdev=11014.48 00:26:57.691 clat percentiles (usec): 
00:26:57.691 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 8717], 00:26:57.691 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[11076], 60.00th=[12649], 00:26:57.691 | 70.00th=[13435], 80.00th=[14484], 90.00th=[16712], 95.00th=[49546], 00:26:57.691 | 99.00th=[53740], 99.50th=[54264], 99.90th=[56361], 99.95th=[90702], 00:26:57.691 | 99.99th=[90702] 00:26:57.691 bw ( KiB/s): min=21760, max=35328, per=35.94%, avg=27449.40, stdev=4649.55, samples=10 00:26:57.691 iops : min= 170, max= 276, avg=214.40, stdev=36.28, samples=10 00:26:57.691 lat (msec) : 10=38.60%, 20=54.05%, 50=2.79%, 100=4.56% 00:26:57.691 cpu : usr=90.37%, sys=9.20%, ctx=10, majf=0, minf=74 00:26:57.691 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.691 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:57.691 00:26:57.691 Run status group 0 (all jobs): 00:26:57.691 READ: bw=74.6MiB/s (78.2MB/s), 23.0MiB/s-26.6MiB/s (24.1MB/s-27.9MB/s), io=376MiB (395MB), run=5005-5046msec 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:57.691 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
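The run-status line above closes the NULL_DIF=3 pass; after tearing down subsystem 0 the harness switches to NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2 and rebuilds subsystems 0 through 2. Each create_subsystem iteration in the xtrace that follows boils down to the same four RPCs, condensed here as direct scripts/rpc.py invocations, which is what the harness's rpc_cmd helper ultimately drives (the bdev geometry, NQNs, and the 10.0.0.2:4420 listener are the values this test bed uses):

    # Null bdev, 64 MiB x 512-byte blocks, 16-byte metadata carrying
    # DIF type 2 protection info.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    # NVMe-oF subsystem any host may connect to, backed by that bdev.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    # Expose it over TCP; fio's bdev_nvme_attach_controller connects here.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats with cnode1/bdev_null1 and cnode2/bdev_null2 before the 24-thread randread pass starts below.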
00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 bdev_null0 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 [2024-07-24 18:09:43.117820] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 bdev_null1 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 bdev_null2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # config=() 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.692 { 00:26:57.692 "params": { 00:26:57.692 "name": "Nvme$subsystem", 00:26:57.692 "trtype": "$TEST_TRANSPORT", 00:26:57.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.692 "adrfam": "ipv4", 00:26:57.692 "trsvcid": "$NVMF_PORT", 00:26:57.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.692 "hdgst": ${hdgst:-false}, 00:26:57.692 "ddgst": ${ddgst:-false} 00:26:57.692 }, 00:26:57.692 "method": "bdev_nvme_attach_controller" 00:26:57.692 } 00:26:57.692 EOF 00:26:57.692 )") 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.692 { 00:26:57.692 "params": { 00:26:57.692 "name": "Nvme$subsystem", 00:26:57.692 "trtype": "$TEST_TRANSPORT", 00:26:57.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.692 "adrfam": "ipv4", 00:26:57.692 "trsvcid": "$NVMF_PORT", 00:26:57.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.692 "hdgst": ${hdgst:-false}, 00:26:57.692 "ddgst": ${ddgst:-false} 00:26:57.692 }, 00:26:57.692 "method": "bdev_nvme_attach_controller" 00:26:57.692 } 00:26:57.692 EOF 00:26:57.692 )") 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:57.692 18:09:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.693 { 00:26:57.693 "params": { 00:26:57.693 "name": "Nvme$subsystem", 00:26:57.693 "trtype": "$TEST_TRANSPORT", 00:26:57.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.693 "adrfam": "ipv4", 00:26:57.693 "trsvcid": "$NVMF_PORT", 00:26:57.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.693 "hdgst": ${hdgst:-false}, 00:26:57.693 "ddgst": ${ddgst:-false} 00:26:57.693 }, 00:26:57.693 "method": "bdev_nvme_attach_controller" 00:26:57.693 } 00:26:57.693 EOF 00:26:57.693 )") 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:57.693 "params": { 00:26:57.693 "name": "Nvme0", 00:26:57.693 "trtype": "tcp", 00:26:57.693 "traddr": "10.0.0.2", 00:26:57.693 "adrfam": "ipv4", 00:26:57.693 "trsvcid": "4420", 00:26:57.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:57.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:57.693 "hdgst": false, 00:26:57.693 "ddgst": false 00:26:57.693 }, 00:26:57.693 "method": "bdev_nvme_attach_controller" 00:26:57.693 },{ 00:26:57.693 "params": { 00:26:57.693 "name": "Nvme1", 00:26:57.693 "trtype": "tcp", 00:26:57.693 "traddr": "10.0.0.2", 00:26:57.693 "adrfam": "ipv4", 00:26:57.693 "trsvcid": "4420", 00:26:57.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:57.693 "hdgst": false, 00:26:57.693 "ddgst": false 00:26:57.693 }, 00:26:57.693 "method": "bdev_nvme_attach_controller" 00:26:57.693 },{ 00:26:57.693 "params": { 00:26:57.693 "name": "Nvme2", 00:26:57.693 "trtype": "tcp", 00:26:57.693 "traddr": "10.0.0.2", 00:26:57.693 "adrfam": "ipv4", 00:26:57.693 "trsvcid": "4420", 00:26:57.693 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:57.693 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:57.693 "hdgst": false, 00:26:57.693 "ddgst": false 00:26:57.693 }, 00:26:57.693 "method": "bdev_nvme_attach_controller" 00:26:57.693 }' 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1343 -- # asan_lib= 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:57.693 18:09:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:57.693 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:57.693 ... 00:26:57.693 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:57.693 ... 00:26:57.693 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:57.693 ... 00:26:57.693 fio-3.35 00:26:57.693 Starting 24 threads 00:26:57.693 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.903 00:27:09.903 filename0: (groupid=0, jobs=1): err= 0: pid=2906264: Wed Jul 24 18:09:54 2024 00:27:09.903 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10017msec) 00:27:09.903 slat (usec): min=8, max=136, avg=41.12, stdev=22.98 00:27:09.903 clat (usec): min=32253, max=74832, avg=34215.31, stdev=2552.41 00:27:09.903 lat (usec): min=32318, max=74850, avg=34256.42, stdev=2548.77 00:27:09.903 clat percentiles (usec): 00:27:09.903 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:27:09.903 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.903 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.903 | 99.00th=[41157], 99.50th=[42206], 99.90th=[74974], 99.95th=[74974], 00:27:09.903 | 99.99th=[74974] 00:27:09.904 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1849.60, stdev=77.42, samples=20 00:27:09.904 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:27:09.904 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.904 cpu : usr=94.51%, sys=3.25%, ctx=223, majf=0, minf=40 00:27:09.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.904 filename0: (groupid=0, jobs=1): err= 0: pid=2906265: Wed Jul 24 18:09:54 2024 00:27:09.904 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10017msec) 00:27:09.904 slat (usec): min=6, max=111, avg=42.76, stdev=15.61 00:27:09.904 clat (usec): min=26486, max=81732, avg=34122.22, stdev=2964.02 00:27:09.904 lat (usec): min=26498, max=81746, avg=34164.97, stdev=2962.70 00:27:09.904 clat percentiles (usec): 00:27:09.904 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.904 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.904 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.904 | 99.00th=[41681], 99.50th=[42206], 99.90th=[81265], 99.95th=[81265], 00:27:09.904 | 99.99th=[81265] 00:27:09.904 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1848.80, stdev=96.63, samples=20 00:27:09.904 iops : min= 384, max= 480, avg=462.20, stdev=24.16, samples=20 00:27:09.904 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.904 cpu : 
usr=91.24%, sys=4.56%, ctx=231, majf=0, minf=35 00:27:09.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.904 filename0: (groupid=0, jobs=1): err= 0: pid=2906266: Wed Jul 24 18:09:54 2024 00:27:09.904 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10022msec) 00:27:09.904 slat (usec): min=7, max=111, avg=28.25, stdev=19.01 00:27:09.904 clat (usec): min=9156, max=55843, avg=32209.57, stdev=4902.31 00:27:09.904 lat (usec): min=9212, max=55878, avg=32237.82, stdev=4895.20 00:27:09.904 clat percentiles (usec): 00:27:09.904 | 1.00th=[15664], 5.00th=[21627], 10.00th=[22938], 20.00th=[33162], 00:27:09.904 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.904 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.904 | 99.00th=[42206], 99.50th=[42206], 99.90th=[53216], 99.95th=[53216], 00:27:09.904 | 99.99th=[55837] 00:27:09.904 bw ( KiB/s): min= 1792, max= 2576, per=4.42%, avg=1972.80, stdev=242.27, samples=20 00:27:09.904 iops : min= 448, max= 644, avg=493.20, stdev=60.57, samples=20 00:27:09.904 lat (msec) : 10=0.14%, 20=2.20%, 50=97.33%, 100=0.32% 00:27:09.904 cpu : usr=95.63%, sys=2.49%, ctx=75, majf=0, minf=43 00:27:09.904 IO depths : 1=4.9%, 2=9.8%, 4=20.6%, 8=56.8%, 16=7.9%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=92.9%, 8=1.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.904 filename0: (groupid=0, jobs=1): err= 0: pid=2906267: Wed Jul 24 18:09:54 2024 00:27:09.904 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.2MiB/10024msec) 00:27:09.904 slat (usec): min=9, max=125, avg=47.91, stdev=18.54 00:27:09.904 clat (usec): min=26688, max=55321, avg=34017.23, stdev=1591.86 00:27:09.904 lat (usec): min=26720, max=55357, avg=34065.14, stdev=1591.14 00:27:09.904 clat percentiles (usec): 00:27:09.904 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:27:09.904 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.904 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.904 | 99.00th=[41157], 99.50th=[42206], 99.90th=[55313], 99.95th=[55313], 00:27:09.904 | 99.99th=[55313] 00:27:09.904 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1856.00, stdev=77.69, samples=20 00:27:09.904 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:27:09.904 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.904 cpu : usr=91.36%, sys=4.28%, ctx=346, majf=0, minf=28 00:27:09.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.904 filename0: (groupid=0, jobs=1): err= 0: pid=2906268: Wed Jul 24 18:09:54 2024 00:27:09.904 read: IOPS=463, BW=1853KiB/s 
(1898kB/s)(18.1MiB/10010msec) 00:27:09.904 slat (usec): min=8, max=114, avg=30.45, stdev=21.69 00:27:09.904 clat (usec): min=16894, max=81716, avg=34367.27, stdev=3314.93 00:27:09.904 lat (usec): min=16904, max=81767, avg=34397.72, stdev=3314.39 00:27:09.904 clat percentiles (usec): 00:27:09.904 | 1.00th=[29230], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:27:09.904 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:27:09.904 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:27:09.904 | 99.00th=[42206], 99.50th=[55837], 99.90th=[81265], 99.95th=[81265], 00:27:09.904 | 99.99th=[81265] 00:27:09.904 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1849.05, stdev=64.05, samples=19 00:27:09.904 iops : min= 416, max= 480, avg=462.26, stdev=16.01, samples=19 00:27:09.904 lat (msec) : 20=0.17%, 50=99.22%, 100=0.60% 00:27:09.904 cpu : usr=92.42%, sys=3.89%, ctx=339, majf=0, minf=29 00:27:09.904 IO depths : 1=1.0%, 2=2.1%, 4=4.5%, 8=75.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=90.2%, 8=8.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.904 filename0: (groupid=0, jobs=1): err= 0: pid=2906269: Wed Jul 24 18:09:54 2024 00:27:09.904 read: IOPS=463, BW=1852KiB/s (1897kB/s)(18.1MiB/10021msec) 00:27:09.904 slat (usec): min=5, max=118, avg=44.56, stdev=15.57 00:27:09.904 clat (usec): min=26397, max=85891, avg=34181.25, stdev=3195.69 00:27:09.904 lat (usec): min=26432, max=85908, avg=34225.81, stdev=3193.37 00:27:09.904 clat percentiles (usec): 00:27:09.904 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.904 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.904 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.904 | 99.00th=[41681], 99.50th=[42206], 99.90th=[85459], 99.95th=[85459], 00:27:09.904 | 99.99th=[85459] 00:27:09.904 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1849.20, stdev=96.88, samples=20 00:27:09.904 iops : min= 384, max= 480, avg=462.30, stdev=24.22, samples=20 00:27:09.904 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.904 cpu : usr=98.27%, sys=1.32%, ctx=18, majf=0, minf=27 00:27:09.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.904 filename0: (groupid=0, jobs=1): err= 0: pid=2906270: Wed Jul 24 18:09:54 2024 00:27:09.904 read: IOPS=464, BW=1858KiB/s (1902kB/s)(18.2MiB/10025msec) 00:27:09.904 slat (usec): min=8, max=109, avg=31.73, stdev=17.02 00:27:09.904 clat (usec): min=14523, max=61810, avg=34175.04, stdev=2158.01 00:27:09.904 lat (usec): min=14533, max=61840, avg=34206.78, stdev=2155.61 00:27:09.904 clat percentiles (usec): 00:27:09.904 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:27:09.904 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.904 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.904 | 99.00th=[40109], 99.50th=[44303], 99.90th=[61604], 
99.95th=[61604], 00:27:09.904 | 99.99th=[61604] 00:27:09.904 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1856.00, stdev=77.69, samples=20 00:27:09.904 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:27:09.904 lat (msec) : 20=0.13%, 50=99.40%, 100=0.47% 00:27:09.904 cpu : usr=98.10%, sys=1.50%, ctx=15, majf=0, minf=32 00:27:09.904 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.904 filename0: (groupid=0, jobs=1): err= 0: pid=2906271: Wed Jul 24 18:09:54 2024 00:27:09.904 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10011msec) 00:27:09.904 slat (usec): min=12, max=138, avg=45.12, stdev=17.94 00:27:09.904 clat (usec): min=26431, max=82871, avg=34090.56, stdev=2689.40 00:27:09.904 lat (usec): min=26469, max=82909, avg=34135.67, stdev=2689.17 00:27:09.904 clat percentiles (usec): 00:27:09.904 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.904 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.904 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.904 | 99.00th=[41681], 99.50th=[42206], 99.90th=[76022], 99.95th=[76022], 00:27:09.904 | 99.99th=[83362] 00:27:09.904 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1852.63, stdev=78.31, samples=19 00:27:09.904 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:27:09.904 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.904 cpu : usr=95.25%, sys=2.81%, ctx=175, majf=0, minf=31 00:27:09.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:09.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.904 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906272: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10013msec) 00:27:09.905 slat (nsec): min=8157, max=93505, avg=27068.16, stdev=12691.67 00:27:09.905 clat (usec): min=25573, max=84580, avg=34267.81, stdev=3126.56 00:27:09.905 lat (usec): min=25598, max=84615, avg=34294.87, stdev=3126.93 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.905 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.905 | 99.00th=[39584], 99.50th=[44303], 99.90th=[84411], 99.95th=[84411], 00:27:09.905 | 99.99th=[84411] 00:27:09.905 bw ( KiB/s): min= 1539, max= 1920, per=4.14%, avg=1849.75, stdev=96.66, samples=20 00:27:09.905 iops : min= 384, max= 480, avg=462.40, stdev=24.29, samples=20 00:27:09.905 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.905 cpu : usr=97.36%, sys=1.80%, ctx=159, majf=0, minf=31 00:27:09.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:09.905 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906273: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.2MiB/10024msec) 00:27:09.905 slat (usec): min=9, max=182, avg=45.60, stdev=25.23 00:27:09.905 clat (usec): min=26843, max=55194, avg=34034.01, stdev=1574.67 00:27:09.905 lat (usec): min=26864, max=55230, avg=34079.60, stdev=1574.53 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.905 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.905 | 99.00th=[41157], 99.50th=[42206], 99.90th=[55313], 99.95th=[55313], 00:27:09.905 | 99.99th=[55313] 00:27:09.905 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1856.00, stdev=77.69, samples=20 00:27:09.905 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:27:09.905 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.905 cpu : usr=96.09%, sys=2.46%, ctx=77, majf=0, minf=32 00:27:09.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906274: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=464, BW=1858KiB/s (1902kB/s)(18.2MiB/10025msec) 00:27:09.905 slat (usec): min=8, max=116, avg=33.77, stdev=19.67 00:27:09.905 clat (usec): min=17964, max=79140, avg=34159.66, stdev=2089.13 00:27:09.905 lat (usec): min=18000, max=79175, avg=34193.43, stdev=2087.97 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.905 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.905 | 99.00th=[39584], 99.50th=[44303], 99.90th=[61604], 99.95th=[61604], 00:27:09.905 | 99.99th=[79168] 00:27:09.905 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1856.00, stdev=77.69, samples=20 00:27:09.905 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:27:09.905 lat (msec) : 20=0.04%, 50=99.61%, 100=0.34% 00:27:09.905 cpu : usr=98.11%, sys=1.49%, ctx=15, majf=0, minf=25 00:27:09.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:09.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906275: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=462, BW=1850KiB/s (1895kB/s)(18.1MiB/10009msec) 00:27:09.905 slat (usec): min=8, max=108, avg=44.66, stdev=20.64 00:27:09.905 clat (usec): min=26436, max=81454, avg=34158.94, stdev=3139.50 00:27:09.905 lat (usec): min=26465, max=81494, avg=34203.61, stdev=3139.04 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[32637], 5.00th=[33162], 
10.00th=[33424], 20.00th=[33424], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.905 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.905 | 99.00th=[41681], 99.50th=[56361], 99.90th=[81265], 99.95th=[81265], 00:27:09.905 | 99.99th=[81265] 00:27:09.905 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1848.21, stdev=75.48, samples=19 00:27:09.905 iops : min= 416, max= 480, avg=462.05, stdev=18.87, samples=19 00:27:09.905 lat (msec) : 50=99.44%, 100=0.56% 00:27:09.905 cpu : usr=97.80%, sys=1.79%, ctx=28, majf=0, minf=65 00:27:09.905 IO depths : 1=5.6%, 2=11.7%, 4=24.7%, 8=51.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:27:09.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 issued rwts: total=4630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906276: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:27:09.905 slat (usec): min=4, max=101, avg=30.51, stdev=16.68 00:27:09.905 clat (usec): min=24707, max=73348, avg=34224.83, stdev=2271.89 00:27:09.905 lat (usec): min=24760, max=73362, avg=34255.33, stdev=2269.86 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.905 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.905 | 99.00th=[40109], 99.50th=[43779], 99.90th=[67634], 99.95th=[72877], 00:27:09.905 | 99.99th=[72877] 00:27:09.905 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1849.75, stdev=77.04, samples=20 00:27:09.905 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:27:09.905 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.905 cpu : usr=95.24%, sys=2.74%, ctx=86, majf=0, minf=31 00:27:09.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:09.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906277: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10010msec) 00:27:09.905 slat (usec): min=14, max=105, avg=45.63, stdev=13.80 00:27:09.905 clat (usec): min=26423, max=75185, avg=34111.58, stdev=2597.60 00:27:09.905 lat (usec): min=26466, max=75233, avg=34157.22, stdev=2596.86 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.905 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.905 | 99.00th=[41681], 99.50th=[42206], 99.90th=[74974], 99.95th=[74974], 00:27:09.905 | 99.99th=[74974] 00:27:09.905 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1852.79, stdev=77.91, samples=19 00:27:09.905 iops : min= 416, max= 480, avg=463.16, stdev=19.58, samples=19 00:27:09.905 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.905 cpu : usr=98.04%, sys=1.53%, ctx=20, majf=0, minf=26 00:27:09.905 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906278: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=466, BW=1866KiB/s (1911kB/s)(18.2MiB/10014msec) 00:27:09.905 slat (usec): min=7, max=121, avg=24.08, stdev=21.56 00:27:09.905 clat (usec): min=10246, max=51273, avg=34081.38, stdev=2328.28 00:27:09.905 lat (usec): min=10254, max=51292, avg=34105.45, stdev=2326.43 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[30016], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:27:09.905 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.905 | 99.00th=[39584], 99.50th=[44303], 99.90th=[51119], 99.95th=[51119], 00:27:09.905 | 99.99th=[51119] 00:27:09.905 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1862.40, stdev=65.33, samples=20 00:27:09.905 iops : min= 448, max= 480, avg=465.60, stdev=16.33, samples=20 00:27:09.905 lat (msec) : 20=0.68%, 50=98.97%, 100=0.34% 00:27:09.905 cpu : usr=95.04%, sys=2.86%, ctx=175, majf=0, minf=39 00:27:09.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:09.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.905 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.905 filename1: (groupid=0, jobs=1): err= 0: pid=2906279: Wed Jul 24 18:09:54 2024 00:27:09.905 read: IOPS=463, BW=1852KiB/s (1897kB/s)(18.1MiB/10019msec) 00:27:09.905 slat (usec): min=14, max=139, avg=47.30, stdev=18.77 00:27:09.905 clat (usec): min=26451, max=83905, avg=34110.32, stdev=3092.16 00:27:09.905 lat (usec): min=26484, max=83934, avg=34157.63, stdev=3091.27 00:27:09.905 clat percentiles (usec): 00:27:09.905 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.905 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.905 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.905 | 99.00th=[41681], 99.50th=[42206], 99.90th=[83362], 99.95th=[83362], 00:27:09.905 | 99.99th=[84411] 00:27:09.905 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1849.60, stdev=97.17, samples=20 00:27:09.906 iops : min= 384, max= 480, avg=462.40, stdev=24.29, samples=20 00:27:09.906 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.906 cpu : usr=98.21%, sys=1.36%, ctx=18, majf=0, minf=26 00:27:09.906 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906280: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10017msec) 00:27:09.906 slat (usec): min=8, max=183, avg=39.06, 
stdev=28.60 00:27:09.906 clat (usec): min=32585, max=74788, avg=34166.33, stdev=2550.55 00:27:09.906 lat (usec): min=32627, max=74806, avg=34205.40, stdev=2548.81 00:27:09.906 clat percentiles (usec): 00:27:09.906 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:27:09.906 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.906 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.906 | 99.00th=[41157], 99.50th=[42206], 99.90th=[74974], 99.95th=[74974], 00:27:09.906 | 99.99th=[74974] 00:27:09.906 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1849.60, stdev=77.42, samples=20 00:27:09.906 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:27:09.906 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.906 cpu : usr=97.63%, sys=1.59%, ctx=55, majf=0, minf=29 00:27:09.906 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906281: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10013msec) 00:27:09.906 slat (usec): min=8, max=120, avg=43.52, stdev=15.92 00:27:09.906 clat (usec): min=26376, max=78033, avg=34178.00, stdev=2803.25 00:27:09.906 lat (usec): min=26412, max=78048, avg=34221.52, stdev=2801.59 00:27:09.906 clat percentiles (usec): 00:27:09.906 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.906 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.906 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.906 | 99.00th=[41681], 99.50th=[42206], 99.90th=[78119], 99.95th=[78119], 00:27:09.906 | 99.99th=[78119] 00:27:09.906 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1849.60, stdev=74.94, samples=20 00:27:09.906 iops : min= 416, max= 480, avg=462.40, stdev=18.73, samples=20 00:27:09.906 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.906 cpu : usr=96.54%, sys=2.15%, ctx=111, majf=0, minf=34 00:27:09.906 IO depths : 1=2.0%, 2=8.3%, 4=25.0%, 8=54.2%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906282: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=464, BW=1860KiB/s (1904kB/s)(18.2MiB/10015msec) 00:27:09.906 slat (usec): min=5, max=110, avg=33.24, stdev=19.51 00:27:09.906 clat (usec): min=17589, max=56248, avg=34130.05, stdev=1759.04 00:27:09.906 lat (usec): min=17646, max=56285, avg=34163.28, stdev=1756.19 00:27:09.906 clat percentiles (usec): 00:27:09.906 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:27:09.906 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.906 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.906 | 99.00th=[40109], 99.50th=[43779], 99.90th=[55837], 99.95th=[55837], 00:27:09.906 | 99.99th=[56361] 00:27:09.906 bw ( KiB/s): min= 1792, max= 1920, 
per=4.16%, avg=1856.00, stdev=65.66, samples=20 00:27:09.906 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:27:09.906 lat (msec) : 20=0.19%, 50=99.46%, 100=0.34% 00:27:09.906 cpu : usr=98.09%, sys=1.49%, ctx=15, majf=0, minf=24 00:27:09.906 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906283: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=464, BW=1858KiB/s (1902kB/s)(18.2MiB/10025msec) 00:27:09.906 slat (nsec): min=8309, max=94270, avg=25621.70, stdev=10870.74 00:27:09.906 clat (usec): min=17411, max=79108, avg=34232.46, stdev=2086.51 00:27:09.906 lat (usec): min=17470, max=79138, avg=34258.09, stdev=2085.40 00:27:09.906 clat percentiles (usec): 00:27:09.906 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:27:09.906 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:27:09.906 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.906 | 99.00th=[40109], 99.50th=[44303], 99.90th=[61604], 99.95th=[61604], 00:27:09.906 | 99.99th=[79168] 00:27:09.906 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1856.00, stdev=77.69, samples=20 00:27:09.906 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:27:09.906 lat (msec) : 20=0.04%, 50=99.61%, 100=0.34% 00:27:09.906 cpu : usr=98.11%, sys=1.49%, ctx=17, majf=0, minf=27 00:27:09.906 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906284: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.2MiB/10024msec) 00:27:09.906 slat (usec): min=10, max=111, avg=44.93, stdev=17.87 00:27:09.906 clat (usec): min=26416, max=55184, avg=34090.18, stdev=1581.44 00:27:09.906 lat (usec): min=26451, max=55231, avg=34135.10, stdev=1578.92 00:27:09.906 clat percentiles (usec): 00:27:09.906 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.906 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.906 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.906 | 99.00th=[41681], 99.50th=[42206], 99.90th=[55313], 99.95th=[55313], 00:27:09.906 | 99.99th=[55313] 00:27:09.906 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1856.00, stdev=77.69, samples=20 00:27:09.906 iops : min= 416, max= 480, avg=464.00, stdev=19.42, samples=20 00:27:09.906 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.906 cpu : usr=98.19%, sys=1.36%, ctx=21, majf=0, minf=34 00:27:09.906 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 
latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906285: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10016msec) 00:27:09.906 slat (nsec): min=8137, max=98199, avg=30697.36, stdev=16923.78 00:27:09.906 clat (usec): min=25906, max=79746, avg=34241.00, stdev=2850.34 00:27:09.906 lat (usec): min=25917, max=79791, avg=34271.70, stdev=2849.86 00:27:09.906 clat percentiles (usec): 00:27:09.906 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:27:09.906 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.906 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:27:09.906 | 99.00th=[40109], 99.50th=[43779], 99.90th=[79168], 99.95th=[79168], 00:27:09.906 | 99.99th=[80217] 00:27:09.906 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1849.60, stdev=77.42, samples=20 00:27:09.906 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:27:09.906 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.906 cpu : usr=97.64%, sys=1.77%, ctx=103, majf=0, minf=33 00:27:09.906 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906286: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10018msec) 00:27:09.906 slat (usec): min=14, max=104, avg=49.24, stdev=16.42 00:27:09.906 clat (usec): min=26593, max=83049, avg=34106.50, stdev=3031.03 00:27:09.906 lat (usec): min=26623, max=83093, avg=34155.74, stdev=3030.08 00:27:09.906 clat percentiles (usec): 00:27:09.906 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.906 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.906 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.906 | 99.00th=[41157], 99.50th=[42206], 99.90th=[83362], 99.95th=[83362], 00:27:09.906 | 99.99th=[83362] 00:27:09.906 bw ( KiB/s): min= 1539, max= 1920, per=4.14%, avg=1849.75, stdev=96.66, samples=20 00:27:09.906 iops : min= 384, max= 480, avg=462.40, stdev=24.29, samples=20 00:27:09.906 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.906 cpu : usr=98.44%, sys=1.16%, ctx=16, majf=0, minf=28 00:27:09.906 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.906 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.906 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.906 filename2: (groupid=0, jobs=1): err= 0: pid=2906287: Wed Jul 24 18:09:54 2024 00:27:09.906 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10012msec) 00:27:09.906 slat (nsec): min=4154, max=87953, avg=42541.37, stdev=12372.94 00:27:09.907 clat (usec): min=26364, max=76725, avg=34134.30, stdev=2689.12 00:27:09.907 lat (usec): min=26398, max=76748, avg=34176.84, stdev=2687.43 00:27:09.907 clat percentiles (usec): 00:27:09.907 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:27:09.907 | 30.00th=[33817], 
40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:27:09.907 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:27:09.907 | 99.00th=[41681], 99.50th=[42206], 99.90th=[77071], 99.95th=[77071], 00:27:09.907 | 99.99th=[77071] 00:27:09.907 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1849.75, stdev=77.04, samples=20 00:27:09.907 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:27:09.907 lat (msec) : 50=99.66%, 100=0.34% 00:27:09.907 cpu : usr=94.72%, sys=2.92%, ctx=140, majf=0, minf=31 00:27:09.907 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:09.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.907 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.907 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.907 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:09.907 00:27:09.907 Run status group 0 (all jobs): 00:27:09.907 READ: bw=43.6MiB/s (45.7MB/s), 1850KiB/s-1975KiB/s (1895kB/s-2022kB/s), io=437MiB (458MB), run=10009-10025msec 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 bdev_null0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 [2024-07-24 18:09:54.955791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 bdev_null1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:27:09.907 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:09.907 { 00:27:09.907 "params": { 00:27:09.907 "name": "Nvme$subsystem", 00:27:09.907 "trtype": "$TEST_TRANSPORT", 00:27:09.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.908 "adrfam": "ipv4", 00:27:09.908 "trsvcid": "$NVMF_PORT", 00:27:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.908 "hdgst": ${hdgst:-false}, 00:27:09.908 "ddgst": ${ddgst:-false} 00:27:09.908 }, 00:27:09.908 "method": "bdev_nvme_attach_controller" 00:27:09.908 } 00:27:09.908 EOF 00:27:09.908 )") 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:09.908 { 00:27:09.908 "params": { 00:27:09.908 "name": "Nvme$subsystem", 00:27:09.908 "trtype": "$TEST_TRANSPORT", 00:27:09.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.908 "adrfam": "ipv4", 00:27:09.908 "trsvcid": "$NVMF_PORT", 00:27:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.908 "hdgst": ${hdgst:-false}, 00:27:09.908 "ddgst": ${ddgst:-false} 00:27:09.908 }, 00:27:09.908 "method": "bdev_nvme_attach_controller" 00:27:09.908 } 00:27:09.908 EOF 00:27:09.908 )") 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:27:09.908 18:09:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:09.908 "params": { 00:27:09.908 "name": "Nvme0", 00:27:09.908 "trtype": "tcp", 00:27:09.908 "traddr": "10.0.0.2", 00:27:09.908 "adrfam": "ipv4", 00:27:09.908 "trsvcid": "4420", 00:27:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:09.908 "hdgst": false, 00:27:09.908 "ddgst": false 00:27:09.908 }, 00:27:09.908 "method": "bdev_nvme_attach_controller" 00:27:09.908 },{ 00:27:09.908 "params": { 00:27:09.908 "name": "Nvme1", 00:27:09.908 "trtype": "tcp", 00:27:09.908 "traddr": "10.0.0.2", 00:27:09.908 "adrfam": "ipv4", 00:27:09.908 "trsvcid": "4420", 00:27:09.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:09.908 "hdgst": false, 00:27:09.908 "ddgst": false 00:27:09.908 }, 00:27:09.908 "method": "bdev_nvme_attach_controller" 00:27:09.908 }' 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:09.908 18:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.908 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:09.908 ... 00:27:09.908 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:09.908 ... 
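For reuse outside this harness: the trace above preloads SPDK's fio bdev engine and passes the generated attach-controller config over /dev/fd/62. A standalone equivalent is sketched below. The job file is an assumption reconstructed from the bs=8k,16k,128k / numjobs=2 / iodepth=8 / runtime=5 parameters set earlier in the trace, and the Nvme0n1/Nvme1n1 filenames assume SPDK's default namespace-bdev naming; only the LD_PRELOAD, --ioengine and --spdk_json_conf mechanics are taken directly from the log.

# Sketch: run the same randread workload against the attached bdevs.
# bdev.json stands in for the JSON config printed in the trace.
cat > dif.fio <<'EOF'
[global]
rw=randread
bs=8k,16k,128k          ; read/write/trim block sizes, matching the trace
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1        ; namespace bdev of controller Nvme0 (assumed name)

[filename1]
filename=Nvme1n1        ; namespace bdev of controller Nvme1 (assumed name)
EOF
LD_PRELOAD=./spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio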
00:27:09.908 fio-3.35 00:27:09.908 Starting 4 threads 00:27:09.908 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.176 00:27:15.176 filename0: (groupid=0, jobs=1): err= 0: pid=2907665: Wed Jul 24 18:10:01 2024 00:27:15.176 read: IOPS=1911, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5002msec) 00:27:15.176 slat (nsec): min=6554, max=62603, avg=12112.20, stdev=6255.18 00:27:15.176 clat (usec): min=1139, max=7779, avg=4146.10, stdev=628.06 00:27:15.176 lat (usec): min=1152, max=7794, avg=4158.21, stdev=628.30 00:27:15.176 clat percentiles (usec): 00:27:15.176 | 1.00th=[ 2704], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3785], 00:27:15.176 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4228], 00:27:15.176 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5342], 00:27:15.176 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7504], 00:27:15.176 | 99.99th=[ 7767] 00:27:15.176 bw ( KiB/s): min=14765, max=15904, per=25.97%, avg=15292.50, stdev=360.93, samples=10 00:27:15.176 iops : min= 1845, max= 1988, avg=1911.50, stdev=45.22, samples=10 00:27:15.176 lat (msec) : 2=0.09%, 4=38.65%, 10=61.25% 00:27:15.176 cpu : usr=93.64%, sys=5.88%, ctx=8, majf=0, minf=33 00:27:15.176 IO depths : 1=0.1%, 2=8.9%, 4=63.1%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:15.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 issued rwts: total=9559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:15.176 filename0: (groupid=0, jobs=1): err= 0: pid=2907666: Wed Jul 24 18:10:01 2024 00:27:15.176 read: IOPS=1816, BW=14.2MiB/s (14.9MB/s)(71.5MiB/5041msec) 00:27:15.176 slat (nsec): min=5195, max=59607, avg=15975.97, stdev=8005.37 00:27:15.176 clat (usec): min=968, max=42152, avg=4335.06, stdev=1080.93 00:27:15.176 lat (usec): min=988, max=42177, avg=4351.04, stdev=1080.30 00:27:15.176 clat percentiles (usec): 00:27:15.176 | 1.00th=[ 2802], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3884], 00:27:15.176 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4228], 00:27:15.176 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5407], 95.00th=[ 6128], 00:27:15.176 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 8029], 00:27:15.176 | 99.99th=[42206] 00:27:15.176 bw ( KiB/s): min=13936, max=15920, per=24.86%, avg=14643.20, stdev=543.20, samples=10 00:27:15.176 iops : min= 1742, max= 1990, avg=1830.40, stdev=67.90, samples=10 00:27:15.176 lat (usec) : 1000=0.01% 00:27:15.176 lat (msec) : 2=0.11%, 4=29.27%, 10=70.57%, 50=0.04% 00:27:15.176 cpu : usr=94.94%, sys=4.48%, ctx=17, majf=0, minf=64 00:27:15.176 IO depths : 1=0.1%, 2=5.7%, 4=66.5%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:15.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 issued rwts: total=9156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:15.176 filename1: (groupid=0, jobs=1): err= 0: pid=2907667: Wed Jul 24 18:10:01 2024 00:27:15.176 read: IOPS=1811, BW=14.2MiB/s (14.8MB/s)(70.8MiB/5002msec) 00:27:15.176 slat (nsec): min=6850, max=62484, avg=13358.45, stdev=7303.38 00:27:15.176 clat (usec): min=774, max=10323, avg=4372.47, stdev=760.81 00:27:15.176 lat (usec): min=788, max=10343, avg=4385.83, stdev=760.60 00:27:15.176 clat percentiles 
(usec): 00:27:15.176 | 1.00th=[ 2966], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 3949], 00:27:15.176 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:27:15.176 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5473], 95.00th=[ 6128], 00:27:15.176 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 8848], 99.95th=[10159], 00:27:15.176 | 99.99th=[10290] 00:27:15.176 bw ( KiB/s): min=14000, max=15488, per=24.53%, avg=14449.78, stdev=496.74, samples=9 00:27:15.176 iops : min= 1750, max= 1936, avg=1806.22, stdev=62.09, samples=9 00:27:15.176 lat (usec) : 1000=0.02% 00:27:15.176 lat (msec) : 2=0.15%, 4=26.35%, 10=73.42%, 20=0.06% 00:27:15.176 cpu : usr=93.70%, sys=5.82%, ctx=6, majf=0, minf=37 00:27:15.176 IO depths : 1=0.4%, 2=7.4%, 4=64.7%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:15.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 issued rwts: total=9060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:15.176 filename1: (groupid=0, jobs=1): err= 0: pid=2907668: Wed Jul 24 18:10:01 2024 00:27:15.176 read: IOPS=1865, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5003msec) 00:27:15.176 slat (nsec): min=7126, max=59293, avg=13263.42, stdev=6687.79 00:27:15.176 clat (usec): min=715, max=9609, avg=4246.28, stdev=670.45 00:27:15.176 lat (usec): min=733, max=9628, avg=4259.55, stdev=670.27 00:27:15.176 clat percentiles (usec): 00:27:15.176 | 1.00th=[ 2868], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3851], 00:27:15.176 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:27:15.176 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5800], 00:27:15.176 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 8029], 00:27:15.176 | 99.99th=[ 9634] 00:27:15.176 bw ( KiB/s): min=14272, max=15760, per=25.34%, avg=14926.40, stdev=458.51, samples=10 00:27:15.176 iops : min= 1784, max= 1970, avg=1865.80, stdev=57.31, samples=10 00:27:15.176 lat (usec) : 750=0.01% 00:27:15.176 lat (msec) : 2=0.03%, 4=32.91%, 10=67.05% 00:27:15.176 cpu : usr=94.20%, sys=5.30%, ctx=12, majf=0, minf=17 00:27:15.176 IO depths : 1=0.1%, 2=4.4%, 4=67.7%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:15.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.176 issued rwts: total=9334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:15.176 00:27:15.176 Run status group 0 (all jobs): 00:27:15.176 READ: bw=57.5MiB/s (60.3MB/s), 14.2MiB/s-14.9MiB/s (14.8MB/s-15.7MB/s), io=290MiB (304MB), run=5002-5041msec 00:27:15.176 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:15.176 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:15.176 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.177 18:10:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.177 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.438 00:27:15.438 real 0m24.544s 00:27:15.438 user 4m29.004s 00:27:15.438 sys 0m8.744s 00:27:15.438 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.438 18:10:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 ************************************ 00:27:15.438 END TEST fio_dif_rand_params 00:27:15.438 ************************************ 00:27:15.438 18:10:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:15.438 18:10:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:15.438 18:10:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.438 18:10:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 ************************************ 00:27:15.438 START TEST fio_dif_digest 00:27:15.438 ************************************ 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:15.438 18:10:01 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 bdev_null0 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:15.438 [2024-07-24 18:10:01.527739] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:15.438 { 00:27:15.438 "params": { 00:27:15.438 "name": "Nvme$subsystem", 00:27:15.438 "trtype": "$TEST_TRANSPORT", 00:27:15.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.438 "adrfam": "ipv4", 00:27:15.438 "trsvcid": "$NVMF_PORT", 00:27:15.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.438 "hdgst": ${hdgst:-false}, 00:27:15.438 "ddgst": ${ddgst:-false} 00:27:15.438 }, 00:27:15.438 "method": 
"bdev_nvme_attach_controller" 00:27:15.438 } 00:27:15.438 EOF 00:27:15.438 )") 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:27:15.438 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:15.439 "params": { 00:27:15.439 "name": "Nvme0", 00:27:15.439 "trtype": "tcp", 00:27:15.439 "traddr": "10.0.0.2", 00:27:15.439 "adrfam": "ipv4", 00:27:15.439 "trsvcid": "4420", 00:27:15.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:15.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:15.439 "hdgst": true, 00:27:15.439 "ddgst": true 00:27:15.439 }, 00:27:15.439 "method": "bdev_nvme_attach_controller" 00:27:15.439 }' 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:15.439 18:10:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.699 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:15.699 ... 
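The printf above emits only the per-controller entry; the file fio actually receives on /dev/fd/62 wraps such entries in a bdev-subsystem config. The wrapper below is an assumption based on SPDK's usual JSON config layout (only the method/params fragment appears verbatim in the trace); the point of this run is that "hdgst" and "ddgst" are true, so the NVMe/TCP connection carries header and data digests end to end.

# Hedged reconstruction of the full config the fio plugin consumes
# (outer "subsystems"/"config" wrapper is assumed, not shown in the trace):
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF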
00:27:15.699 fio-3.35 00:27:15.699 Starting 3 threads 00:27:15.699 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.909 00:27:27.909 filename0: (groupid=0, jobs=1): err= 0: pid=2908418: Wed Jul 24 18:10:12 2024 00:27:27.909 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(239MiB/10004msec) 00:27:27.909 slat (nsec): min=5018, max=37680, avg=14517.09, stdev=2021.39 00:27:27.909 clat (usec): min=9454, max=59649, avg=15710.31, stdev=2631.92 00:27:27.909 lat (usec): min=9468, max=59664, avg=15724.83, stdev=2631.88 00:27:27.909 clat percentiles (usec): 00:27:27.909 | 1.00th=[11338], 5.00th=[13698], 10.00th=[14222], 20.00th=[14746], 00:27:27.909 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:27:27.909 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:27:27.909 | 99.00th=[18220], 99.50th=[19006], 99.90th=[58459], 99.95th=[59507], 00:27:27.909 | 99.99th=[59507] 00:27:27.909 bw ( KiB/s): min=21504, max=25600, per=32.15%, avg=24373.89, stdev=940.91, samples=19 00:27:27.909 iops : min= 168, max= 200, avg=190.42, stdev= 7.35, samples=19 00:27:27.909 lat (msec) : 10=0.10%, 20=99.42%, 50=0.16%, 100=0.31% 00:27:27.909 cpu : usr=91.90%, sys=7.44%, ctx=93, majf=0, minf=151 00:27:27.909 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:27.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.909 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.909 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:27.909 filename0: (groupid=0, jobs=1): err= 0: pid=2908419: Wed Jul 24 18:10:12 2024 00:27:27.909 read: IOPS=204, BW=25.5MiB/s (26.7MB/s)(256MiB/10047msec) 00:27:27.909 slat (nsec): min=4596, max=41931, avg=14970.78, stdev=2884.78 00:27:27.909 clat (usec): min=8821, max=55613, avg=14663.30, stdev=2336.74 00:27:27.909 lat (usec): min=8835, max=55628, avg=14678.27, stdev=2336.72 00:27:27.909 clat percentiles (usec): 00:27:27.909 | 1.00th=[10159], 5.00th=[12649], 10.00th=[13173], 20.00th=[13698], 00:27:27.909 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:27:27.910 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:27:27.910 | 99.00th=[17695], 99.50th=[17957], 99.90th=[55837], 99.95th=[55837], 00:27:27.910 | 99.99th=[55837] 00:27:27.910 bw ( KiB/s): min=24320, max=27648, per=34.58%, avg=26217.05, stdev=668.35, samples=20 00:27:27.910 iops : min= 190, max= 216, avg=204.80, stdev= 5.21, samples=20 00:27:27.910 lat (msec) : 10=0.88%, 20=98.73%, 50=0.20%, 100=0.20% 00:27:27.910 cpu : usr=90.63%, sys=8.00%, ctx=311, majf=0, minf=107 00:27:27.910 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:27.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.910 issued rwts: total=2050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.910 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:27.910 filename0: (groupid=0, jobs=1): err= 0: pid=2908420: Wed Jul 24 18:10:12 2024 00:27:27.910 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10007msec) 00:27:27.910 slat (nsec): min=4508, max=28939, avg=14192.55, stdev=1534.29 00:27:27.910 clat (usec): min=8252, max=59033, avg=15042.55, stdev=2624.18 00:27:27.910 lat (usec): min=8265, max=59047, avg=15056.74, stdev=2624.16 00:27:27.910 clat percentiles (usec): 
00:27:27.910 | 1.00th=[10421], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:27:27.910 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:27:27.910 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16319], 95.00th=[16909], 00:27:27.910 | 99.00th=[17695], 99.50th=[18482], 99.90th=[58459], 99.95th=[58983], 00:27:27.910 | 99.99th=[58983] 00:27:27.910 bw ( KiB/s): min=23040, max=27136, per=33.61%, avg=25484.80, stdev=977.38, samples=20 00:27:27.910 iops : min= 180, max= 212, avg=199.10, stdev= 7.64, samples=20 00:27:27.910 lat (msec) : 10=0.65%, 20=98.90%, 50=0.15%, 100=0.30% 00:27:27.910 cpu : usr=92.87%, sys=6.63%, ctx=49, majf=0, minf=107 00:27:27.910 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:27.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.910 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.910 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:27.910 00:27:27.910 Run status group 0 (all jobs): 00:27:27.910 READ: bw=74.0MiB/s (77.6MB/s), 23.8MiB/s-25.5MiB/s (25.0MB/s-26.7MB/s), io=744MiB (780MB), run=10004-10047msec 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.910 00:27:27.910 real 0m11.248s 00:27:27.910 user 0m29.079s 00:27:27.910 sys 0m2.501s 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.910 18:10:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:27.910 ************************************ 00:27:27.910 END TEST fio_dif_digest 00:27:27.910 ************************************ 00:27:27.910 18:10:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:27.910 18:10:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.910 rmmod nvme_tcp 00:27:27.910 rmmod 
nvme_fabrics 00:27:27.910 rmmod nvme_keyring 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2902372 ']' 00:27:27.910 18:10:12 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2902372 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2902372 ']' 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2902372 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2902372 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2902372' 00:27:27.910 killing process with pid 2902372 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2902372 00:27:27.910 18:10:12 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2902372 00:27:27.910 18:10:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:27.910 18:10:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:27.910 Waiting for block devices as requested 00:27:28.169 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:28.169 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:28.169 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:28.427 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:28.427 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:28.427 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:28.427 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:28.427 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:28.686 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:28.686 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:28.945 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:28.945 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:28.945 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:28.945 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:29.205 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:29.205 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:29.205 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:29.466 18:10:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.466 18:10:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.466 18:10:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.466 18:10:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.466 18:10:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.466 18:10:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:29.466 18:10:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.377 18:10:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.377 00:27:31.377 real 1m7.077s 00:27:31.377 user 6m25.235s 00:27:31.377 sys 0m21.030s 00:27:31.377 18:10:17 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.377 18:10:17 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.377 ************************************ 00:27:31.377 END TEST nvmf_dif 00:27:31.377 ************************************ 00:27:31.377 18:10:17 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:31.377 18:10:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.377 18:10:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.377 18:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:31.377 ************************************ 00:27:31.378 START TEST nvmf_abort_qd_sizes 00:27:31.378 ************************************ 00:27:31.378 18:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:31.636 * Looking for test storage... 00:27:31.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.636 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.637 18:10:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.637 18:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:33.542 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:33.542 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:33.542 Found net devices under 0000:09:00.0: cvl_0_0 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:33.542 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:33.543 Found net devices under 0000:09:00.1: cvl_0_1 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
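
The two "Found net devices under ..." entries above come from walking sysfs: each supported PCI function is matched by vendor/device ID and then mapped to the kernel network interface that hangs off its PCI device node. A condensed sketch of that mapping, using the addresses from this run; the loop is illustrative, not the verbatim nvmf/common.sh source:

  for pci in 0000:09:00.0 0000:09:00.1; do
      # an ice-bound E810 port exposes its netdev under the PCI device node
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
      done
  done
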
00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:33.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:27:33.543 00:27:33.543 --- 10.0.0.2 ping statistics --- 00:27:33.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.543 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
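
The namespace setup that produced the ping above isolates the target-side port so both ends of one dual-port E810 NIC can exercise a real TCP path on a single host. Condensed from the xtrace, with the interface and namespace names this run generated:

  ip netns add cvl_0_0_ns_spdk                  # target lives in its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # initiator -> target sanity check
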
00:27:33.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:27:33.543 00:27:33.543 --- 10.0.0.1 ping statistics --- 00:27:33.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.543 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:33.543 18:10:19 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:34.477 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:34.478 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:34.737 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:34.737 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:34.737 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:34.737 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:34.737 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:34.737 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:34.737 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:35.678 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2913335 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2913335 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2913335 ']' 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
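
nvmfappstart above launches the target application inside the namespace and blocks until its RPC socket answers. Roughly, with paths relative to the spdk checkout used in this workspace; the polling loop is a simplification of waitforlisten, while rpc.py and rpc_get_methods are standard SPDK tooling:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # wait until /var/tmp/spdk.sock accepts JSON-RPC before configuring anything
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
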
00:27:35.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.939 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:35.939 [2024-07-24 18:10:22.088983] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:27:35.939 [2024-07-24 18:10:22.089054] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.939 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.939 [2024-07-24 18:10:22.155511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.198 [2024-07-24 18:10:22.270882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.198 [2024-07-24 18:10:22.270933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.198 [2024-07-24 18:10:22.270962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.198 [2024-07-24 18:10:22.270974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.198 [2024-07-24 18:10:22.270984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.198 [2024-07-24 18:10:22.272123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.198 [2024-07-24 18:10:22.272148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.198 [2024-07-24 18:10:22.272201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.198 [2024-07-24 18:10:22.272204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:27:36.198 18:10:22 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.198 18:10:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:36.198 ************************************ 00:27:36.198 START TEST spdk_target_abort 00:27:36.198 ************************************ 00:27:36.198 18:10:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:36.198 18:10:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:36.198 18:10:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:27:36.198 18:10:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.198 18:10:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:39.558 spdk_targetn1 00:27:39.558 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.558 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:39.559 [2024-07-24 18:10:25.306881] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:39.559 [2024-07-24 18:10:25.339127] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:39.559 18:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:39.559 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:42.849 Initializing NVMe Controllers 00:27:42.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:42.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:42.849 Initialization complete. Launching workers. 00:27:42.849 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9841, failed: 0 00:27:42.849 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1144, failed to submit 8697 00:27:42.849 success 766, unsuccess 378, failed 0 00:27:42.849 18:10:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:42.849 18:10:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:42.849 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.134 Initializing NVMe Controllers 00:27:46.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:46.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:46.134 Initialization complete. Launching workers. 00:27:46.134 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9016, failed: 0 00:27:46.134 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1185, failed to submit 7831 00:27:46.134 success 276, unsuccess 909, failed 0 00:27:46.134 18:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:46.134 18:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:46.134 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.426 Initializing NVMe Controllers 00:27:49.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:49.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:49.426 Initialization complete. Launching workers. 
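
Each summary above is one pass of the queue-depth sweep, and its counters are self-consistent: in the qd-4 pass, 766 successful plus 378 unsuccessful aborts account for the 1144 submitted, and 1144 submitted plus 8697 failed-to-submit matches the 9841 completed I/Os. The sweep itself reduces to the following, with flags exactly as used in the xtrace:

  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
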
00:27:49.426 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31297, failed: 0 00:27:49.426 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2667, failed to submit 28630 00:27:49.426 success 552, unsuccess 2115, failed 0 00:27:49.426 18:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:49.426 18:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.426 18:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.426 18:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.426 18:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:49.426 18:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.426 18:10:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2913335 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2913335 ']' 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2913335 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2913335 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2913335' 00:27:50.361 killing process with pid 2913335 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2913335 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2913335 00:27:50.361 00:27:50.361 real 0m14.149s 00:27:50.361 user 0m53.291s 00:27:50.361 sys 0m2.698s 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.361 18:10:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:50.361 ************************************ 00:27:50.361 END TEST spdk_target_abort 00:27:50.361 ************************************ 00:27:50.620 18:10:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:50.620 18:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:50.620 18:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.620 18:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:50.620 ************************************ 00:27:50.620 START TEST kernel_target_abort 00:27:50.620 
************************************ 00:27:50.620 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:50.620 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:50.621 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:51.557 Waiting for block devices as requested 00:27:51.557 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:51.815 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:51.815 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:51.815 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:52.072 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:52.072 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:52.072 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:52.072 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:52.331 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:52.331 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:52.590 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:52.590 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:52.591 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:52.591 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:52.849 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:52.849 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:52.849 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:53.120 No valid GPT data, bailing 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:53.120 18:10:39 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:53.120 00:27:53.120 Discovery Log Number of Records 2, Generation counter 2 00:27:53.120 =====Discovery Log Entry 0====== 00:27:53.120 trtype: tcp 00:27:53.120 adrfam: ipv4 00:27:53.120 subtype: current discovery subsystem 00:27:53.120 treq: not specified, sq flow control disable supported 00:27:53.120 portid: 1 00:27:53.120 trsvcid: 4420 00:27:53.120 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:53.120 traddr: 10.0.0.1 00:27:53.120 eflags: none 00:27:53.120 sectype: none 00:27:53.120 =====Discovery Log Entry 1====== 00:27:53.120 trtype: tcp 00:27:53.120 adrfam: ipv4 00:27:53.120 subtype: nvme subsystem 00:27:53.120 treq: not specified, sq flow control disable supported 00:27:53.120 portid: 1 00:27:53.120 trsvcid: 4420 00:27:53.120 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:53.120 traddr: 10.0.0.1 00:27:53.120 eflags: none 00:27:53.120 sectype: none 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:53.120 18:10:39 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:53.120 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:53.120 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.407 Initializing NVMe Controllers 00:27:56.407 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:56.407 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:56.407 Initialization complete. Launching workers. 00:27:56.407 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33156, failed: 0 00:27:56.407 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33156, failed to submit 0 00:27:56.407 success 0, unsuccess 33156, failed 0 00:27:56.407 18:10:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:56.408 18:10:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:56.408 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.694 Initializing NVMe Controllers 00:27:59.694 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:59.694 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:59.694 Initialization complete. Launching workers. 
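
The kernel-side target exercised here was assembled earlier through the standard nvmet configfs tree. The echoed values are the ones visible in the xtrace; the attribute file names are the usual nvmet ones and are inferred, since bash xtrace does not show redirection targets:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir -p "$sub/namespaces/1" "$port"
  echo 1            > "$sub/attr_allow_any_host"   # inferred attribute name
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"          # serving TCP needs the nvmet-tcp module
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                 # expose the subsystem on the port

The nvme discover output above, with its two records (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), is the check that this port is actually serving before the abort passes start.
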
00:27:59.694 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64253, failed: 0 00:27:59.694 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16206, failed to submit 48047 00:27:59.694 success 0, unsuccess 16206, failed 0 00:27:59.694 18:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:59.694 18:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:59.694 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.981 Initializing NVMe Controllers 00:28:02.981 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:02.981 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:02.981 Initialization complete. Launching workers. 00:28:02.981 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62552, failed: 0 00:28:02.981 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15630, failed to submit 46922 00:28:02.981 success 0, unsuccess 15630, failed 0 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:02.981 18:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:03.918 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:03.918 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:03.918 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:03.918 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:03.918 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:03.918 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:03.918 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:03.918 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:03.918 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:03.918 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:03.918 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:03.918 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:03.918 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:03.918 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:28:03.918 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:03.918 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:04.858 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:04.858 00:28:04.858 real 0m14.404s 00:28:04.858 user 0m5.176s 00:28:04.858 sys 0m3.439s 00:28:04.858 18:10:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.858 18:10:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:04.858 ************************************ 00:28:04.858 END TEST kernel_target_abort 00:28:04.858 ************************************ 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.858 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.858 rmmod nvme_tcp 00:28:04.858 rmmod nvme_fabrics 00:28:04.858 rmmod nvme_keyring 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2913335 ']' 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2913335 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2913335 ']' 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2913335 00:28:05.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2913335) - No such process 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2913335 is not found' 00:28:05.116 Process with pid 2913335 is not found 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:05.116 18:10:51 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:06.088 Waiting for block devices as requested 00:28:06.088 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:06.088 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:06.088 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:06.348 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:06.348 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:06.348 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:06.348 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:06.609 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:06.609 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:06.609 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:06.868 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:06.868 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:06.868 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:07.126 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:07.126 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:07.126 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:28:07.126 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:07.388 18:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:07.388 18:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:07.388 18:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.388 18:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.388 18:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.388 18:10:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:07.388 18:10:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.298 18:10:55 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.298 00:28:09.298 real 0m37.864s 00:28:09.298 user 1m0.483s 00:28:09.298 sys 0m9.358s 00:28:09.298 18:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:09.298 18:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:09.298 ************************************ 00:28:09.298 END TEST nvmf_abort_qd_sizes 00:28:09.298 ************************************ 00:28:09.298 18:10:55 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:09.298 18:10:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:09.298 18:10:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.298 18:10:55 -- common/autotest_common.sh@10 -- # set +x 00:28:09.298 ************************************ 00:28:09.298 START TEST keyring_file 00:28:09.298 ************************************ 00:28:09.298 18:10:55 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:09.557 * Looking for test storage... 
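
Every suite in this log is driven through the same run_test wrapper, which explains the recurring pattern just seen: a START TEST banner, the body run under time, the real/user/sys triplet, then an END TEST banner. A rough shape of it, not the verbatim autotest_common.sh source:

  run_test() {
      [ $# -le 1 ] && return 1      # the '[' 2 -le 1 ']' entries above are this guard
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                     # produces the real/user/sys lines in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
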
00:28:09.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.557 18:10:55 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.557 18:10:55 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.557 18:10:55 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.557 18:10:55 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.557 18:10:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.557 18:10:55 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.557 18:10:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:09.557 18:10:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ugIglZV5gT 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:09.557 18:10:55 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ugIglZV5gT 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ugIglZV5gT 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ugIglZV5gT 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LZgMvmFZW3 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:09.557 18:10:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LZgMvmFZW3 00:28:09.557 18:10:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LZgMvmFZW3 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LZgMvmFZW3 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=2919100 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:09.557 18:10:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2919100 00:28:09.557 18:10:55 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2919100 ']' 00:28:09.557 18:10:55 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.557 18:10:55 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.557 18:10:55 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.557 18:10:55 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.557 18:10:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:09.557 [2024-07-24 18:10:55.744592] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
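Worth unpacking the `prep_key` steps traced above: each key is written by `mktemp` to a throwaway file as an NVMe TLS PSK interchange string, then locked to mode 0600, the permission the keyring code insists on later in the run. The encoding itself happens in the inline `python -` step. A minimal stand-alone sketch of that encoding, assuming the standard interchange layout `NVMeTLSkey-1:<digest>:<base64(key || CRC32)>:` and that the helper passes the key argument through byte-for-byte (both are assumptions here, not verbatim SPDK code):

format_interchange_psk() {  # hypothetical stand-in for the helper traced above
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key material, used byte-for-byte
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended before base64
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

path=$(mktemp) && format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path" && chmod 0600 "$path"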
00:28:09.557 [2024-07-24 18:10:55.744682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919100 ] 00:28:09.557 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.557 [2024-07-24 18:10:55.807254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.816 [2024-07-24 18:10:55.925460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:10.756 18:10:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:10.756 [2024-07-24 18:10:56.679499] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.756 null0 00:28:10.756 [2024-07-24 18:10:56.711551] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:10.756 [2024-07-24 18:10:56.711960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:10.756 [2024-07-24 18:10:56.719548] tcp.c:3729:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.756 18:10:56 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:10.756 [2024-07-24 18:10:56.727572] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:10.756 request: 00:28:10.756 { 00:28:10.756 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.756 "secure_channel": false, 00:28:10.756 "listen_address": { 00:28:10.756 "trtype": "tcp", 00:28:10.756 "traddr": "127.0.0.1", 00:28:10.756 "trsvcid": "4420" 00:28:10.756 }, 00:28:10.756 "method": "nvmf_subsystem_add_listener", 00:28:10.756 "req_id": 1 00:28:10.756 } 00:28:10.756 Got JSON-RPC error response 00:28:10.756 response: 00:28:10.756 { 00:28:10.756 "code": -32602, 00:28:10.756 "message": "Invalid parameters" 00:28:10.756 } 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 
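The `es` bookkeeping here (it continues at the top of the next stretch of trace) is the harness's `NOT` wrapper asserting that the second `nvmf_subsystem_add_listener` call fails: the target already claimed 127.0.0.1:4420 when the subsystem was created, so the RPC must come back with -32602. Stripped of the `valid_exec_arg` plumbing, the wrapper reduces to something like the following, a paraphrase of the visible xtrace rather than the verbatim helper:

NOT() {
  local es=0
  "$@" || es=$?
  # Exit codes above 128 mean the command died on a signal; propagate those
  # rather than counting them as the failure we were hoping for.
  (( es > 128 )) && return "$es"
  # Succeed only when the wrapped command failed.
  (( es != 0 ))
}

NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0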
00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.756 18:10:56 keyring_file -- keyring/file.sh@46 -- # bperfpid=2919236 00:28:10.756 18:10:56 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:10.756 18:10:56 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2919236 /var/tmp/bperf.sock 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2919236 ']' 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.756 18:10:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:10.757 [2024-07-24 18:10:56.774851] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 00:28:10.757 [2024-07-24 18:10:56.774924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919236 ] 00:28:10.757 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.757 [2024-07-24 18:10:56.834714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.757 [2024-07-24 18:10:56.951768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.696 18:10:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.696 18:10:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:11.696 18:10:57 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:11.696 18:10:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:11.956 18:10:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LZgMvmFZW3 00:28:11.956 18:10:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LZgMvmFZW3 00:28:11.956 18:10:58 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:11.956 18:10:58 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:12.216 18:10:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.216 18:10:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.216 18:10:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:12.475 18:10:58 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ugIglZV5gT == \/\t\m\p\/\t\m\p\.\u\g\I\g\l\Z\V\5\g\T ]] 00:28:12.475 18:10:58 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:28:12.475 18:10:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:12.475 18:10:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.475 18:10:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.475 18:10:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:12.733 18:10:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.LZgMvmFZW3 == \/\t\m\p\/\t\m\p\.\L\Z\g\M\v\m\F\Z\W\3 ]] 00:28:12.733 18:10:58 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:12.733 18:10:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:12.733 18:10:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.733 18:10:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.733 18:10:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.733 18:10:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:12.991 18:10:59 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:12.991 18:10:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:12.991 18:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:12.991 18:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.991 18:10:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.991 18:10:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.991 18:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:13.249 18:10:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:13.249 18:10:59 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:13.249 18:10:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:13.249 [2024-07-24 18:10:59.507182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:13.507 nvme0n1 00:28:13.507 18:10:59 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:13.507 18:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:13.507 18:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:13.507 18:10:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:13.507 18:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:13.507 18:10:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.766 18:10:59 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:13.766 18:10:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:13.766 18:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:13.766 18:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:13.766 18:10:59 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:13.766 18:10:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.766 18:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:14.025 18:11:00 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:14.025 18:11:00 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.025 Running I/O for 1 seconds... 00:28:15.405 00:28:15.405 Latency(us) 00:28:15.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.405 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:15.405 nvme0n1 : 1.02 4969.89 19.41 0.00 0.00 25475.54 4271.98 33204.91 00:28:15.405 =================================================================================================================== 00:28:15.405 Total : 4969.89 19.41 0.00 0.00 25475.54 4271.98 33204.91 00:28:15.405 0 00:28:15.405 18:11:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:15.405 18:11:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:15.405 18:11:01 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:15.405 18:11:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:15.405 18:11:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:15.405 18:11:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:15.405 18:11:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.405 18:11:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:15.662 18:11:01 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:15.662 18:11:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:15.662 18:11:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:15.662 18:11:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:15.662 18:11:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:15.662 18:11:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.662 18:11:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:15.920 18:11:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:15.920 18:11:02 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:15.920 18:11:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:15.920 18:11:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:15.920 18:11:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:15.920 18:11:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.920 18:11:02 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:15.920 18:11:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.920 18:11:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:15.920 18:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:16.178 [2024-07-24 18:11:02.249817] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:16.178 [2024-07-24 18:11:02.250643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd009a0 (107): Transport endpoint is not connected 00:28:16.178 [2024-07-24 18:11:02.251637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd009a0 (9): Bad file descriptor 00:28:16.178 [2024-07-24 18:11:02.252635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:16.178 [2024-07-24 18:11:02.252657] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:16.178 [2024-07-24 18:11:02.252672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:16.178 request: 00:28:16.178 { 00:28:16.178 "name": "nvme0", 00:28:16.178 "trtype": "tcp", 00:28:16.178 "traddr": "127.0.0.1", 00:28:16.178 "adrfam": "ipv4", 00:28:16.178 "trsvcid": "4420", 00:28:16.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:16.178 "prchk_reftag": false, 00:28:16.178 "prchk_guard": false, 00:28:16.178 "hdgst": false, 00:28:16.178 "ddgst": false, 00:28:16.178 "psk": "key1", 00:28:16.178 "method": "bdev_nvme_attach_controller", 00:28:16.178 "req_id": 1 00:28:16.178 } 00:28:16.178 Got JSON-RPC error response 00:28:16.178 response: 00:28:16.178 { 00:28:16.178 "code": -5, 00:28:16.178 "message": "Input/output error" 00:28:16.178 } 00:28:16.178 18:11:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:16.178 18:11:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:16.178 18:11:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:16.178 18:11:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:16.178 18:11:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:16.178 18:11:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:16.178 18:11:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:16.178 18:11:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:16.179 18:11:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:16.179 18:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:16.436 18:11:02 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:16.436 18:11:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:16.436 18:11:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:16.436 18:11:02 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:16.436 18:11:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:16.436 18:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:16.436 18:11:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:16.694 18:11:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:16.694 18:11:02 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:16.694 18:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:16.952 18:11:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:16.952 18:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:17.213 18:11:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:17.213 18:11:03 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:17.213 18:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.473 18:11:03 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:17.474 18:11:03 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ugIglZV5gT 00:28:17.474 18:11:03 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:17.474 18:11:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:17.474 18:11:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:17.474 18:11:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:17.474 18:11:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:17.474 18:11:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:17.474 18:11:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:17.474 18:11:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:17.474 18:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:17.732 [2024-07-24 18:11:03.751562] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ugIglZV5gT': 0100660 00:28:17.732 [2024-07-24 18:11:03.751598] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:17.732 request: 00:28:17.732 { 00:28:17.732 "name": "key0", 00:28:17.732 "path": "/tmp/tmp.ugIglZV5gT", 00:28:17.732 "method": "keyring_file_add_key", 00:28:17.732 "req_id": 1 00:28:17.732 } 00:28:17.732 Got JSON-RPC error response 00:28:17.732 response: 00:28:17.732 { 00:28:17.732 "code": -1, 00:28:17.732 "message": "Operation not permitted" 00:28:17.732 } 00:28:17.732 18:11:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:17.732 18:11:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:17.732 18:11:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:17.732 18:11:03 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:17.732 18:11:03 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ugIglZV5gT 00:28:17.732 18:11:03 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:17.732 18:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ugIglZV5gT 00:28:17.991 18:11:04 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ugIglZV5gT 00:28:17.991 18:11:04 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:17.991 18:11:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:17.991 18:11:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:17.991 18:11:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.991 18:11:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.991 18:11:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:18.250 18:11:04 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:18.250 18:11:04 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:18.250 18:11:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:18.250 [2024-07-24 18:11:04.493595] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ugIglZV5gT': No such file or directory 00:28:18.250 [2024-07-24 18:11:04.493630] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:18.250 [2024-07-24 18:11:04.493661] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:18.250 [2024-07-24 18:11:04.493674] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:18.250 [2024-07-24 18:11:04.493687] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:18.250 request: 00:28:18.250 { 00:28:18.250 "name": "nvme0", 00:28:18.250 "trtype": "tcp", 00:28:18.250 "traddr": "127.0.0.1", 00:28:18.250 "adrfam": "ipv4", 00:28:18.250 
"trsvcid": "4420", 00:28:18.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.250 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:18.250 "prchk_reftag": false, 00:28:18.250 "prchk_guard": false, 00:28:18.250 "hdgst": false, 00:28:18.250 "ddgst": false, 00:28:18.250 "psk": "key0", 00:28:18.250 "method": "bdev_nvme_attach_controller", 00:28:18.250 "req_id": 1 00:28:18.250 } 00:28:18.250 Got JSON-RPC error response 00:28:18.250 response: 00:28:18.250 { 00:28:18.250 "code": -19, 00:28:18.250 "message": "No such device" 00:28:18.250 } 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:18.250 18:11:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:18.250 18:11:04 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:18.250 18:11:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:18.510 18:11:04 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:18.510 18:11:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:18.510 18:11:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:18.510 18:11:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:18.510 18:11:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:18.510 18:11:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:18.510 18:11:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tpYjkONgk2 00:28:18.510 18:11:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:18.510 18:11:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:18.510 18:11:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:18.510 18:11:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:18.510 18:11:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:18.510 18:11:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:18.510 18:11:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:18.768 18:11:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tpYjkONgk2 00:28:18.768 18:11:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tpYjkONgk2 00:28:18.768 18:11:04 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.tpYjkONgk2 00:28:18.768 18:11:04 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tpYjkONgk2 00:28:18.768 18:11:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tpYjkONgk2 00:28:19.026 18:11:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:19.026 18:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:19.284 nvme0n1 00:28:19.284 
18:11:05 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:19.284 18:11:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:19.284 18:11:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:19.284 18:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:19.284 18:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:19.284 18:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:19.542 18:11:05 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:19.542 18:11:05 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:19.542 18:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:19.801 18:11:05 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:19.801 18:11:05 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:19.801 18:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:19.801 18:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:19.801 18:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:20.059 18:11:06 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:20.059 18:11:06 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:20.059 18:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:20.059 18:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:20.059 18:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:20.059 18:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:20.059 18:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:20.317 18:11:06 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:20.317 18:11:06 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:20.317 18:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:20.576 18:11:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:20.576 18:11:06 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:20.576 18:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:20.834 18:11:06 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:20.834 18:11:06 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tpYjkONgk2 00:28:20.834 18:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tpYjkONgk2 00:28:21.092 18:11:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LZgMvmFZW3 00:28:21.092 18:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LZgMvmFZW3 00:28:21.352 18:11:07 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:21.352 18:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:21.610 nvme0n1 00:28:21.610 18:11:07 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:21.610 18:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:21.871 18:11:08 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:21.871 "subsystems": [ 00:28:21.871 { 00:28:21.871 "subsystem": "keyring", 00:28:21.871 "config": [ 00:28:21.871 { 00:28:21.871 "method": "keyring_file_add_key", 00:28:21.871 "params": { 00:28:21.871 "name": "key0", 00:28:21.871 "path": "/tmp/tmp.tpYjkONgk2" 00:28:21.871 } 00:28:21.871 }, 00:28:21.871 { 00:28:21.871 "method": "keyring_file_add_key", 00:28:21.871 "params": { 00:28:21.871 "name": "key1", 00:28:21.871 "path": "/tmp/tmp.LZgMvmFZW3" 00:28:21.871 } 00:28:21.871 } 00:28:21.871 ] 00:28:21.871 }, 00:28:21.871 { 00:28:21.871 "subsystem": "iobuf", 00:28:21.871 "config": [ 00:28:21.871 { 00:28:21.871 "method": "iobuf_set_options", 00:28:21.871 "params": { 00:28:21.871 "small_pool_count": 8192, 00:28:21.871 "large_pool_count": 1024, 00:28:21.871 "small_bufsize": 8192, 00:28:21.871 "large_bufsize": 135168 00:28:21.871 } 00:28:21.871 } 00:28:21.871 ] 00:28:21.871 }, 00:28:21.871 { 00:28:21.871 "subsystem": "sock", 00:28:21.871 "config": [ 00:28:21.871 { 00:28:21.871 "method": "sock_set_default_impl", 00:28:21.871 "params": { 00:28:21.871 "impl_name": "posix" 00:28:21.871 } 00:28:21.871 }, 00:28:21.871 { 00:28:21.871 "method": "sock_impl_set_options", 00:28:21.871 "params": { 00:28:21.871 "impl_name": "ssl", 00:28:21.871 "recv_buf_size": 4096, 00:28:21.871 "send_buf_size": 4096, 00:28:21.871 "enable_recv_pipe": true, 00:28:21.871 "enable_quickack": false, 00:28:21.871 "enable_placement_id": 0, 00:28:21.871 "enable_zerocopy_send_server": true, 00:28:21.871 "enable_zerocopy_send_client": false, 00:28:21.871 "zerocopy_threshold": 0, 00:28:21.871 "tls_version": 0, 00:28:21.871 "enable_ktls": false 00:28:21.871 } 00:28:21.871 }, 00:28:21.871 { 00:28:21.871 "method": "sock_impl_set_options", 00:28:21.871 "params": { 00:28:21.871 "impl_name": "posix", 00:28:21.871 "recv_buf_size": 2097152, 00:28:21.871 "send_buf_size": 2097152, 00:28:21.871 "enable_recv_pipe": true, 00:28:21.871 "enable_quickack": false, 00:28:21.871 "enable_placement_id": 0, 00:28:21.871 "enable_zerocopy_send_server": true, 00:28:21.871 "enable_zerocopy_send_client": false, 00:28:21.871 "zerocopy_threshold": 0, 00:28:21.871 "tls_version": 0, 00:28:21.871 "enable_ktls": false 00:28:21.871 } 00:28:21.871 } 00:28:21.871 ] 00:28:21.871 }, 00:28:21.871 { 00:28:21.871 "subsystem": "vmd", 00:28:21.871 "config": [] 00:28:21.871 }, 00:28:21.871 { 00:28:21.871 "subsystem": "accel", 00:28:21.871 "config": [ 00:28:21.871 { 00:28:21.871 "method": "accel_set_options", 00:28:21.871 "params": { 00:28:21.871 "small_cache_size": 128, 00:28:21.871 "large_cache_size": 16, 00:28:21.871 "task_count": 2048, 00:28:21.871 "sequence_count": 2048, 00:28:21.871 "buf_count": 2048 00:28:21.871 } 00:28:21.871 } 00:28:21.871 ] 00:28:21.871 
}, 00:28:21.871 { 00:28:21.871 "subsystem": "bdev", 00:28:21.871 "config": [ 00:28:21.871 { 00:28:21.871 "method": "bdev_set_options", 00:28:21.871 "params": { 00:28:21.871 "bdev_io_pool_size": 65535, 00:28:21.871 "bdev_io_cache_size": 256, 00:28:21.871 "bdev_auto_examine": true, 00:28:21.871 "iobuf_small_cache_size": 128, 00:28:21.871 "iobuf_large_cache_size": 16 00:28:21.871 } 00:28:21.871 }, 00:28:21.871 { 00:28:21.872 "method": "bdev_raid_set_options", 00:28:21.872 "params": { 00:28:21.872 "process_window_size_kb": 1024, 00:28:21.872 "process_max_bandwidth_mb_sec": 0 00:28:21.872 } 00:28:21.872 }, 00:28:21.872 { 00:28:21.872 "method": "bdev_iscsi_set_options", 00:28:21.872 "params": { 00:28:21.872 "timeout_sec": 30 00:28:21.872 } 00:28:21.872 }, 00:28:21.872 { 00:28:21.872 "method": "bdev_nvme_set_options", 00:28:21.872 "params": { 00:28:21.872 "action_on_timeout": "none", 00:28:21.872 "timeout_us": 0, 00:28:21.872 "timeout_admin_us": 0, 00:28:21.872 "keep_alive_timeout_ms": 10000, 00:28:21.872 "arbitration_burst": 0, 00:28:21.872 "low_priority_weight": 0, 00:28:21.872 "medium_priority_weight": 0, 00:28:21.872 "high_priority_weight": 0, 00:28:21.872 "nvme_adminq_poll_period_us": 10000, 00:28:21.872 "nvme_ioq_poll_period_us": 0, 00:28:21.872 "io_queue_requests": 512, 00:28:21.872 "delay_cmd_submit": true, 00:28:21.872 "transport_retry_count": 4, 00:28:21.872 "bdev_retry_count": 3, 00:28:21.872 "transport_ack_timeout": 0, 00:28:21.872 "ctrlr_loss_timeout_sec": 0, 00:28:21.872 "reconnect_delay_sec": 0, 00:28:21.872 "fast_io_fail_timeout_sec": 0, 00:28:21.872 "disable_auto_failback": false, 00:28:21.872 "generate_uuids": false, 00:28:21.872 "transport_tos": 0, 00:28:21.872 "nvme_error_stat": false, 00:28:21.872 "rdma_srq_size": 0, 00:28:21.872 "io_path_stat": false, 00:28:21.872 "allow_accel_sequence": false, 00:28:21.872 "rdma_max_cq_size": 0, 00:28:21.872 "rdma_cm_event_timeout_ms": 0, 00:28:21.872 "dhchap_digests": [ 00:28:21.872 "sha256", 00:28:21.872 "sha384", 00:28:21.872 "sha512" 00:28:21.872 ], 00:28:21.872 "dhchap_dhgroups": [ 00:28:21.872 "null", 00:28:21.872 "ffdhe2048", 00:28:21.872 "ffdhe3072", 00:28:21.872 "ffdhe4096", 00:28:21.872 "ffdhe6144", 00:28:21.872 "ffdhe8192" 00:28:21.872 ] 00:28:21.872 } 00:28:21.872 }, 00:28:21.872 { 00:28:21.872 "method": "bdev_nvme_attach_controller", 00:28:21.872 "params": { 00:28:21.872 "name": "nvme0", 00:28:21.872 "trtype": "TCP", 00:28:21.872 "adrfam": "IPv4", 00:28:21.872 "traddr": "127.0.0.1", 00:28:21.872 "trsvcid": "4420", 00:28:21.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.872 "prchk_reftag": false, 00:28:21.872 "prchk_guard": false, 00:28:21.872 "ctrlr_loss_timeout_sec": 0, 00:28:21.872 "reconnect_delay_sec": 0, 00:28:21.872 "fast_io_fail_timeout_sec": 0, 00:28:21.872 "psk": "key0", 00:28:21.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:21.872 "hdgst": false, 00:28:21.872 "ddgst": false 00:28:21.872 } 00:28:21.872 }, 00:28:21.872 { 00:28:21.872 "method": "bdev_nvme_set_hotplug", 00:28:21.872 "params": { 00:28:21.872 "period_us": 100000, 00:28:21.872 "enable": false 00:28:21.872 } 00:28:21.872 }, 00:28:21.872 { 00:28:21.872 "method": "bdev_wait_for_examine" 00:28:21.872 } 00:28:21.872 ] 00:28:21.872 }, 00:28:21.872 { 00:28:21.872 "subsystem": "nbd", 00:28:21.872 "config": [] 00:28:21.872 } 00:28:21.872 ] 00:28:21.872 }' 00:28:21.872 18:11:08 keyring_file -- keyring/file.sh@114 -- # killprocess 2919236 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2919236 ']' 00:28:21.872 18:11:08 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 2919236 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919236 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2919236' 00:28:21.872 killing process with pid 2919236 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@967 -- # kill 2919236 00:28:21.872 Received shutdown signal, test time was about 1.000000 seconds 00:28:21.872 00:28:21.872 Latency(us) 00:28:21.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.872 =================================================================================================================== 00:28:21.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.872 18:11:08 keyring_file -- common/autotest_common.sh@972 -- # wait 2919236 00:28:22.131 18:11:08 keyring_file -- keyring/file.sh@117 -- # bperfpid=2920709 00:28:22.131 18:11:08 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2920709 /var/tmp/bperf.sock 00:28:22.131 18:11:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2920709 ']' 00:28:22.131 18:11:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.131 18:11:08 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:22.131 18:11:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.131 18:11:08 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:22.131 "subsystems": [ 00:28:22.131 { 00:28:22.131 "subsystem": "keyring", 00:28:22.131 "config": [ 00:28:22.131 { 00:28:22.131 "method": "keyring_file_add_key", 00:28:22.131 "params": { 00:28:22.131 "name": "key0", 00:28:22.131 "path": "/tmp/tmp.tpYjkONgk2" 00:28:22.131 } 00:28:22.131 }, 00:28:22.131 { 00:28:22.131 "method": "keyring_file_add_key", 00:28:22.131 "params": { 00:28:22.131 "name": "key1", 00:28:22.131 "path": "/tmp/tmp.LZgMvmFZW3" 00:28:22.131 } 00:28:22.131 } 00:28:22.131 ] 00:28:22.131 }, 00:28:22.131 { 00:28:22.131 "subsystem": "iobuf", 00:28:22.131 "config": [ 00:28:22.131 { 00:28:22.131 "method": "iobuf_set_options", 00:28:22.131 "params": { 00:28:22.131 "small_pool_count": 8192, 00:28:22.131 "large_pool_count": 1024, 00:28:22.131 "small_bufsize": 8192, 00:28:22.131 "large_bufsize": 135168 00:28:22.131 } 00:28:22.131 } 00:28:22.131 ] 00:28:22.131 }, 00:28:22.131 { 00:28:22.131 "subsystem": "sock", 00:28:22.131 "config": [ 00:28:22.131 { 00:28:22.131 "method": "sock_set_default_impl", 00:28:22.131 "params": { 00:28:22.131 "impl_name": "posix" 00:28:22.131 } 00:28:22.131 }, 00:28:22.131 { 00:28:22.131 "method": "sock_impl_set_options", 00:28:22.132 "params": { 00:28:22.132 "impl_name": "ssl", 00:28:22.132 "recv_buf_size": 4096, 00:28:22.132 "send_buf_size": 4096, 00:28:22.132 "enable_recv_pipe": true, 00:28:22.132 "enable_quickack": false, 00:28:22.132 "enable_placement_id": 0, 00:28:22.132 "enable_zerocopy_send_server": true, 00:28:22.132 "enable_zerocopy_send_client": false, 
00:28:22.132 "zerocopy_threshold": 0, 00:28:22.132 "tls_version": 0, 00:28:22.132 "enable_ktls": false 00:28:22.132 } 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "method": "sock_impl_set_options", 00:28:22.132 "params": { 00:28:22.132 "impl_name": "posix", 00:28:22.132 "recv_buf_size": 2097152, 00:28:22.132 "send_buf_size": 2097152, 00:28:22.132 "enable_recv_pipe": true, 00:28:22.132 "enable_quickack": false, 00:28:22.132 "enable_placement_id": 0, 00:28:22.132 "enable_zerocopy_send_server": true, 00:28:22.132 "enable_zerocopy_send_client": false, 00:28:22.132 "zerocopy_threshold": 0, 00:28:22.132 "tls_version": 0, 00:28:22.132 "enable_ktls": false 00:28:22.132 } 00:28:22.132 } 00:28:22.132 ] 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "subsystem": "vmd", 00:28:22.132 "config": [] 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "subsystem": "accel", 00:28:22.132 "config": [ 00:28:22.132 { 00:28:22.132 "method": "accel_set_options", 00:28:22.132 "params": { 00:28:22.132 "small_cache_size": 128, 00:28:22.132 "large_cache_size": 16, 00:28:22.132 "task_count": 2048, 00:28:22.132 "sequence_count": 2048, 00:28:22.132 "buf_count": 2048 00:28:22.132 } 00:28:22.132 } 00:28:22.132 ] 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "subsystem": "bdev", 00:28:22.132 "config": [ 00:28:22.132 { 00:28:22.132 "method": "bdev_set_options", 00:28:22.132 "params": { 00:28:22.132 "bdev_io_pool_size": 65535, 00:28:22.132 "bdev_io_cache_size": 256, 00:28:22.132 "bdev_auto_examine": true, 00:28:22.132 "iobuf_small_cache_size": 128, 00:28:22.132 "iobuf_large_cache_size": 16 00:28:22.132 } 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "method": "bdev_raid_set_options", 00:28:22.132 "params": { 00:28:22.132 "process_window_size_kb": 1024, 00:28:22.132 "process_max_bandwidth_mb_sec": 0 00:28:22.132 } 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "method": "bdev_iscsi_set_options", 00:28:22.132 "params": { 00:28:22.132 "timeout_sec": 30 00:28:22.132 } 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "method": "bdev_nvme_set_options", 00:28:22.132 "params": { 00:28:22.132 "action_on_timeout": "none", 00:28:22.132 "timeout_us": 0, 00:28:22.132 "timeout_admin_us": 0, 00:28:22.132 "keep_alive_timeout_ms": 10000, 00:28:22.132 "arbitration_burst": 0, 00:28:22.132 "low_priority_weight": 0, 00:28:22.132 "medium_priority_weight": 0, 00:28:22.132 "high_priority_weight": 0, 00:28:22.132 "nvme_adminq_poll_period_us": 10000, 00:28:22.132 "nvme_ioq_poll_period_us": 0, 00:28:22.132 "io_queue_requests": 512, 00:28:22.132 "delay_cmd_submit": true, 00:28:22.132 "transport_retry_count": 4, 00:28:22.132 "bdev_retry_count": 3, 00:28:22.132 "transport_ack_timeout": 0, 00:28:22.132 "ctrlr_loss_timeout_sec": 0, 00:28:22.132 "reconnect_delay_sec": 0, 00:28:22.132 "fast_io_fail_timeout_sec": 0, 00:28:22.132 "disable_auto_failback": false, 00:28:22.132 "generate_uuids": false, 00:28:22.132 "transport_tos": 0, 00:28:22.132 "nvme_error_stat": false, 00:28:22.132 "rdma_srq_size": 0, 00:28:22.132 "io_path_stat": false, 00:28:22.132 18:11:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:22.132 "allow_accel_sequence": false, 00:28:22.132 "rdma_max_cq_size": 0, 00:28:22.132 "rdma_cm_event_timeout_ms": 0, 00:28:22.132 "dhchap_digests": [ 00:28:22.132 "sha256", 00:28:22.132 "sha384", 00:28:22.132 "sha512" 00:28:22.132 ], 00:28:22.132 "dhchap_dhgroups": [ 00:28:22.132 "null", 00:28:22.132 "ffdhe2048", 00:28:22.132 "ffdhe3072", 00:28:22.132 "ffdhe4096", 00:28:22.132 "ffdhe6144", 00:28:22.132 "ffdhe8192" 00:28:22.132 ] 00:28:22.132 } 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "method": "bdev_nvme_attach_controller", 00:28:22.132 "params": { 00:28:22.132 "name": "nvme0", 00:28:22.132 "trtype": "TCP", 00:28:22.132 "adrfam": "IPv4", 00:28:22.132 "traddr": "127.0.0.1", 00:28:22.132 "trsvcid": "4420", 00:28:22.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:22.132 "prchk_reftag": false, 00:28:22.132 "prchk_guard": false, 00:28:22.132 "ctrlr_loss_timeout_sec": 0, 00:28:22.132 "reconnect_delay_sec": 0, 00:28:22.132 "fast_io_fail_timeout_sec": 0, 00:28:22.132 "psk": "key0", 00:28:22.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:22.132 "hdgst": false, 00:28:22.132 "ddgst": false 00:28:22.132 } 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "method": "bdev_nvme_set_hotplug", 00:28:22.132 "params": { 00:28:22.132 "period_us": 100000, 00:28:22.132 "enable": false 00:28:22.132 } 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "method": "bdev_wait_for_examine" 00:28:22.132 } 00:28:22.132 ] 00:28:22.132 }, 00:28:22.132 { 00:28:22.132 "subsystem": "nbd", 00:28:22.132 "config": [] 00:28:22.132 } 00:28:22.132 ] 00:28:22.132 }' 00:28:22.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.132 18:11:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.132 18:11:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:22.132 [2024-07-24 18:11:08.365033] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
00:28:22.132 [2024-07-24 18:11:08.365122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920709 ] 00:28:22.132 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.392 [2024-07-24 18:11:08.425740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.392 [2024-07-24 18:11:08.540909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.651 [2024-07-24 18:11:08.737858] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:23.218 18:11:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.218 18:11:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:23.218 18:11:09 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:23.218 18:11:09 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:23.218 18:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.476 18:11:09 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:23.476 18:11:09 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:23.476 18:11:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:23.476 18:11:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.476 18:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.476 18:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.476 18:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:23.735 18:11:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:23.735 18:11:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:23.735 18:11:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:23.735 18:11:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.735 18:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.735 18:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.735 18:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:23.993 18:11:10 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:23.993 18:11:10 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:23.993 18:11:10 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:23.993 18:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:24.255 18:11:10 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:24.255 18:11:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:24.255 18:11:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.tpYjkONgk2 /tmp/tmp.LZgMvmFZW3 00:28:24.255 18:11:10 keyring_file -- keyring/file.sh@20 -- # killprocess 2920709 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2920709 ']' 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2920709 00:28:24.255 18:11:10 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2920709 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2920709' 00:28:24.255 killing process with pid 2920709 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@967 -- # kill 2920709 00:28:24.255 Received shutdown signal, test time was about 1.000000 seconds 00:28:24.255 00:28:24.255 Latency(us) 00:28:24.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.255 =================================================================================================================== 00:28:24.255 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:24.255 18:11:10 keyring_file -- common/autotest_common.sh@972 -- # wait 2920709 00:28:24.539 18:11:10 keyring_file -- keyring/file.sh@21 -- # killprocess 2919100 00:28:24.539 18:11:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2919100 ']' 00:28:24.539 18:11:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2919100 00:28:24.539 18:11:10 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:24.539 18:11:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.540 18:11:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919100 00:28:24.540 18:11:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:24.540 18:11:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:24.540 18:11:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2919100' 00:28:24.540 killing process with pid 2919100 00:28:24.540 18:11:10 keyring_file -- common/autotest_common.sh@967 -- # kill 2919100 00:28:24.540 [2024-07-24 18:11:10.666601] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:24.540 18:11:10 keyring_file -- common/autotest_common.sh@972 -- # wait 2919100 00:28:25.110 00:28:25.110 real 0m15.603s 00:28:25.110 user 0m37.696s 00:28:25.110 sys 0m3.426s 00:28:25.110 18:11:11 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.110 18:11:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:25.110 ************************************ 00:28:25.110 END TEST keyring_file 00:28:25.110 ************************************ 00:28:25.110 18:11:11 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:25.110 18:11:11 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:25.110 18:11:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:25.110 18:11:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.110 18:11:11 -- common/autotest_common.sh@10 -- # set +x 00:28:25.110 ************************************ 00:28:25.110 START TEST keyring_linux 00:28:25.110 ************************************ 00:28:25.110 18:11:11 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:25.110 * Looking for test 
storage... 00:28:25.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.111 18:11:11 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.111 18:11:11 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.111 18:11:11 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.111 18:11:11 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 18:11:11 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 18:11:11 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 18:11:11 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:25.111 18:11:11 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:25.111 18:11:11 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:25.111 /tmp/:spdk-test:key0 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:25.111 18:11:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:25.111 18:11:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:25.111 /tmp/:spdk-test:key1 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2921085 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:25.111 18:11:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2921085 00:28:25.111 18:11:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2921085 ']' 00:28:25.111 18:11:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.111 18:11:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.111 18:11:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.111 18:11:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.111 18:11:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:25.111 [2024-07-24 18:11:11.368330] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
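The prep_key trace above converts the raw hex string (key0=00112233445566778899aabbccddeeff, digest 0) into the NVMe TLS interchange form NVMeTLSkey-1:00:<base64>: that the keyctl calls further down load into the session keyring. A minimal sketch of that conversion, assuming the CRC32 of the ASCII key bytes is appended little-endian (inferred from the key material in this log, not checked against nvmf/common.sh):

    key=00112233445566778899aabbccddeeff
    # interchange PSK = "NVMeTLSkey-1:" + two-digit digest + ":" + base64(key bytes + crc32) + ":"
    psk=$(python3 -c 'import base64, zlib, sys; k = sys.argv[1].encode(); print("NVMeTLSkey-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key")
    echo "$psk"   # if the endianness assumption holds, this reproduces the NVMeTLSkey-1:00:MDAx... string above
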
00:28:25.111 [2024-07-24 18:11:11.368438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921085 ] 00:28:25.369 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.369 [2024-07-24 18:11:11.433304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.369 [2024-07-24 18:11:11.537735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:25.627 18:11:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:25.627 [2024-07-24 18:11:11.780179] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.627 null0 00:28:25.627 [2024-07-24 18:11:11.812225] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:25.627 [2024-07-24 18:11:11.812714] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.627 18:11:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:25.627 449437099 00:28:25.627 18:11:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:25.627 397731392 00:28:25.627 18:11:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2921202 00:28:25.627 18:11:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:25.627 18:11:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2921202 /var/tmp/bperf.sock 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2921202 ']' 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:25.627 18:11:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:25.627 [2024-07-24 18:11:11.877138] Starting SPDK v24.09-pre git sha1 5c0b15eed / DPDK 24.03.0 initialization... 
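Every bperf_cmd trace below is the same helper: scripts/rpc.py aimed at bdevperf's RPC socket. A condensed sketch of the flow, with $rootdir standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout (a readability shorthand, not a variable taken from this log):

    bperf_cmd() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    bperf_cmd keyring_linux_set_options --enable   # allow keys to come from the kernel keyring
    bperf_cmd framework_start_init                 # bdevperf was launched with --wait-for-rpc
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0                      # PSK named by keyring entry, not a file path

Handing --psk a keyring name instead of a /tmp path is what keyring_linux exercises: bdevperf resolves the TLS PSK from the kernel session keyring rather than reading it from disk.
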
00:28:25.627 [2024-07-24 18:11:11.877218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921202 ] 00:28:25.887 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.887 [2024-07-24 18:11:11.937894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.887 [2024-07-24 18:11:12.055512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.821 18:11:12 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.821 18:11:12 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:26.821 18:11:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:26.821 18:11:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:26.821 18:11:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:26.821 18:11:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:27.388 18:11:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:27.388 18:11:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:27.388 [2024-07-24 18:11:13.627902] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:27.646 nvme0n1 00:28:27.646 18:11:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:27.646 18:11:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:27.646 18:11:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:27.646 18:11:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:27.646 18:11:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:27.646 18:11:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.904 18:11:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:27.904 18:11:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:27.904 18:11:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:27.904 18:11:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:27.904 18:11:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:27.904 18:11:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.904 18:11:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:28.162 18:11:14 keyring_linux -- keyring/linux.sh@25 -- # sn=449437099 00:28:28.162 18:11:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:28.162 18:11:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:28:28.162 18:11:14 keyring_linux -- keyring/linux.sh@26 -- # [[ 449437099 == \4\4\9\4\3\7\0\9\9 ]] 00:28:28.162 18:11:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 449437099 00:28:28.162 18:11:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:28.163 18:11:14 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.163 Running I/O for 1 seconds... 00:28:29.099 00:28:29.099 Latency(us) 00:28:29.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.099 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:29.099 nvme0n1 : 1.02 4772.48 18.64 0.00 0.00 26593.39 7621.59 37476.88 00:28:29.099 =================================================================================================================== 00:28:29.099 Total : 4772.48 18.64 0.00 0.00 26593.39 7621.59 37476.88 00:28:29.099 0 00:28:29.099 18:11:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:29.099 18:11:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:29.356 18:11:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:29.356 18:11:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:29.356 18:11:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:29.356 18:11:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:29.356 18:11:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.356 18:11:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:29.614 18:11:15 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:29.614 18:11:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:29.614 18:11:15 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:29.614 18:11:15 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:29.614 18:11:15 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:29.614 18:11:15 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:29.614 18:11:15 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:29.614 18:11:15 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.614 18:11:15 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:29.614 18:11:15 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:29.614 18:11:15 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:29.614 18:11:15 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:29.872 [2024-07-24 18:11:16.096210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:29.872 [2024-07-24 18:11:16.096288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c36ed0 (107): Transport endpoint is not connected 00:28:29.872 [2024-07-24 18:11:16.097280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c36ed0 (9): Bad file descriptor 00:28:29.872 [2024-07-24 18:11:16.098279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:29.872 [2024-07-24 18:11:16.098298] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:29.872 [2024-07-24 18:11:16.098312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:29.872 request: 00:28:29.872 { 00:28:29.872 "name": "nvme0", 00:28:29.872 "trtype": "tcp", 00:28:29.872 "traddr": "127.0.0.1", 00:28:29.872 "adrfam": "ipv4", 00:28:29.872 "trsvcid": "4420", 00:28:29.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:29.872 "prchk_reftag": false, 00:28:29.872 "prchk_guard": false, 00:28:29.872 "hdgst": false, 00:28:29.872 "ddgst": false, 00:28:29.872 "psk": ":spdk-test:key1", 00:28:29.872 "method": "bdev_nvme_attach_controller", 00:28:29.872 "req_id": 1 00:28:29.872 } 00:28:29.872 Got JSON-RPC error response 00:28:29.872 response: 00:28:29.872 { 00:28:29.872 "code": -5, 00:28:29.872 "message": "Input/output error" 00:28:29.872 } 00:28:29.872 18:11:16 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:29.872 18:11:16 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:29.872 18:11:16 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:29.872 18:11:16 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@33 -- # sn=449437099 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 449437099 00:28:29.872 1 links removed 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:29.872 18:11:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:29.873 18:11:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:29.873 18:11:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:29.873 18:11:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:29.873 18:11:16 keyring_linux -- keyring/linux.sh@33 -- # sn=397731392 00:28:29.873 
18:11:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 397731392 00:28:29.873 1 links removed 00:28:29.873 18:11:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2921202 00:28:29.873 18:11:16 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2921202 ']' 00:28:29.873 18:11:16 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2921202 00:28:29.873 18:11:16 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:29.873 18:11:16 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.873 18:11:16 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2921202 00:28:30.132 18:11:16 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:30.132 18:11:16 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:30.132 18:11:16 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2921202' 00:28:30.132 killing process with pid 2921202 00:28:30.132 18:11:16 keyring_linux -- common/autotest_common.sh@967 -- # kill 2921202 00:28:30.132 Received shutdown signal, test time was about 1.000000 seconds 00:28:30.132 00:28:30.132 Latency(us) 00:28:30.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.132 =================================================================================================================== 00:28:30.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.132 18:11:16 keyring_linux -- common/autotest_common.sh@972 -- # wait 2921202 00:28:30.392 18:11:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2921085 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2921085 ']' 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2921085 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2921085 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2921085' 00:28:30.392 killing process with pid 2921085 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@967 -- # kill 2921085 00:28:30.392 18:11:16 keyring_linux -- common/autotest_common.sh@972 -- # wait 2921085 00:28:30.958 00:28:30.958 real 0m5.751s 00:28:30.958 user 0m11.056s 00:28:30.958 sys 0m1.538s 00:28:30.958 18:11:16 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.958 18:11:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:30.958 ************************************ 00:28:30.958 END TEST keyring_linux 00:28:30.958 ************************************ 00:28:30.958 18:11:16 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- 
spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:30.958 18:11:16 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:30.958 18:11:16 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:30.958 18:11:16 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:30.958 18:11:16 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:30.958 18:11:16 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:30.958 18:11:16 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:30.958 18:11:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:30.958 18:11:16 -- common/autotest_common.sh@10 -- # set +x 00:28:30.958 18:11:16 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:30.958 18:11:16 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:28:30.958 18:11:16 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:28:30.958 18:11:16 -- common/autotest_common.sh@10 -- # set +x 00:28:32.861 INFO: APP EXITING 00:28:32.861 INFO: killing all VMs 00:28:32.861 INFO: killing vhost app 00:28:32.861 INFO: EXIT DONE 00:28:33.429 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:33.429 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:33.688 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:33.688 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:33.688 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:33.688 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:33.688 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:33.688 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:33.688 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:28:33.688 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:33.688 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:33.688 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:33.688 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:33.688 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:33.688 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:33.688 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:33.688 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:35.063 Cleaning 00:28:35.063 Removing: /var/run/dpdk/spdk0/config 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:35.063 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:35.063 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:35.063 Removing: /var/run/dpdk/spdk1/config 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:35.063 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:35.063 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:35.063 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:35.063 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:35.063 Removing: /var/run/dpdk/spdk2/config 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:35.063 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:35.063 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:35.063 Removing: /var/run/dpdk/spdk3/config 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:35.063 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:35.063 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:35.063 Removing: /var/run/dpdk/spdk4/config 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:35.063 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:35.322 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:35.322 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:35.322 Removing: /dev/shm/bdev_svc_trace.1 00:28:35.322 Removing: /dev/shm/nvmf_trace.0 00:28:35.322 Removing: /dev/shm/spdk_tgt_trace.pid2657402 00:28:35.322 Removing: /var/run/dpdk/spdk0 00:28:35.322 Removing: /var/run/dpdk/spdk1 00:28:35.322 Removing: /var/run/dpdk/spdk2 00:28:35.322 Removing: /var/run/dpdk/spdk3 00:28:35.322 Removing: /var/run/dpdk/spdk4 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2655849 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2656585 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2657402 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2657835 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2658524 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2658670 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2659389 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2659524 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2659765 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2660962 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2662007 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2662321 
00:28:35.322 Removing: /var/run/dpdk/spdk_pid2662507 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2662707 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2662904 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2663063 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2663334 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2663522 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2663716 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2666159 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2666352 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2666515 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2666706 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2667072 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2667197 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2667664 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2667884 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2668310 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2668442 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2668604 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2668744 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2669113 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2669269 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2669590 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2669761 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2669870 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2669974 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2670136 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2670409 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2670566 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2670736 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2671001 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2671154 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2671339 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2671589 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2671746 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2671982 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2672181 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2672336 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2672614 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2672772 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2672929 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2673208 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2673367 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2673529 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2673800 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2673965 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2674146 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2674357 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2676559 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2679060 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2685979 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2686441 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2688927 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2689109 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2691643 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2695326 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2697504 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2704537 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2709629 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2710945 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2711619 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2722104 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2724381 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2750957 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2754239 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2758078 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2762051 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2762053 
00:28:35.322 Removing: /var/run/dpdk/spdk_pid2762657 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2763241 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2763899 00:28:35.322 Removing: /var/run/dpdk/spdk_pid2764301 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2764303 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2764532 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2764576 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2764584 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2765240 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2765892 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2766478 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2766882 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2766983 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2767132 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2768144 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2768867 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2774814 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2800462 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2803291 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2804474 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2805736 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2805808 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2805948 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2806084 00:28:35.323 Removing: /var/run/dpdk/spdk_pid2806518 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2807726 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2808466 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2808894 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2810510 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2810936 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2811387 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2813892 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2817307 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2817308 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2817309 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2822169 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2825022 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2829440 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2830382 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2831483 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2834066 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2836431 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2840767 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2840774 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2843545 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2843678 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2843937 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2844203 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2844216 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2846959 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2847302 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2849952 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2851821 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2855226 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2858545 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2864983 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2869870 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2869872 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2882094 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2882507 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2883026 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2883441 00:28:35.581 Removing: /var/run/dpdk/spdk_pid2884159 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2884686 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2885120 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2885632 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2888263 
00:28:35.582 Removing: /var/run/dpdk/spdk_pid2888411 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2892200 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2892369 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2893977 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2899128 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2899133 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2902535 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2903904 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2905259 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2906085 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2907486 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2908366 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2913719 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2914028 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2914421 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2915977 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2916373 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2916655 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2919100 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2919236 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2920709 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2921085 00:28:35.582 Removing: /var/run/dpdk/spdk_pid2921202 00:28:35.582 Clean 00:28:35.582 18:11:21 -- common/autotest_common.sh@1449 -- # return 0 00:28:35.582 18:11:21 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:35.582 18:11:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.582 18:11:21 -- common/autotest_common.sh@10 -- # set +x 00:28:35.582 18:11:21 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:35.582 18:11:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.582 18:11:21 -- common/autotest_common.sh@10 -- # set +x 00:28:35.582 18:11:21 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:35.582 18:11:21 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:35.582 18:11:21 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:35.582 18:11:21 -- spdk/autotest.sh@391 -- # hash lcov 00:28:35.582 18:11:21 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:35.582 18:11:21 -- spdk/autotest.sh@393 -- # hostname 00:28:35.582 18:11:21 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:35.840 geninfo: WARNING: invalid characters removed from testname! 
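The geninfo warning above is harmless; the lcov passes that follow fold the per-test counters into one report and strip out-of-tree sources. Condensed, the sequence amounts to the sketch below (flags abridged; SPDK_DIR and OUT are shorthands for the spdk checkout and its ../output directory, and the unquoted $LCOV relies on intentional word splitting):

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    $LCOV -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"                 # capture this run
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"   # merge with baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        $LCOV -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"               # drop vendored/system code
    done
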
00:29:07.941 18:11:49 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:07.941 18:11:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:11.235 18:11:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:13.830 18:12:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:17.110 18:12:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:20.402 18:12:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:22.943 18:12:08 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:22.943 18:12:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.943 18:12:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:22.943 18:12:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.943 18:12:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.943 18:12:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.943 18:12:09 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.943 18:12:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.943 18:12:09 -- paths/export.sh@5 -- $ export PATH 00:29:22.943 18:12:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.943 18:12:09 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:22.943 18:12:09 -- common/autobuild_common.sh@447 -- $ date +%s 00:29:22.943 18:12:09 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721837529.XXXXXX 00:29:22.943 18:12:09 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721837529.JpWT1w 00:29:22.943 18:12:09 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:29:22.943 18:12:09 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:29:22.943 18:12:09 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:22.943 18:12:09 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:22.943 18:12:09 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:22.943 18:12:09 -- common/autobuild_common.sh@463 -- $ get_config_params 00:29:22.943 18:12:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:22.943 18:12:09 -- common/autotest_common.sh@10 -- $ set +x 00:29:22.943 18:12:09 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:22.943 18:12:09 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:29:22.943 18:12:09 -- pm/common@17 -- $ local monitor 00:29:22.943 18:12:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.943 18:12:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.943 18:12:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.943 18:12:09 -- pm/common@21 -- $ date +%s 00:29:22.943 18:12:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.943 18:12:09 -- pm/common@21 -- $ date +%s 00:29:22.943 
18:12:09 -- pm/common@25 -- $ sleep 1 00:29:22.943 18:12:09 -- pm/common@21 -- $ date +%s 00:29:22.943 18:12:09 -- pm/common@21 -- $ date +%s 00:29:22.943 18:12:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721837529 00:29:22.943 18:12:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721837529 00:29:22.943 18:12:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721837529 00:29:22.943 18:12:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721837529 00:29:22.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721837529_collect-vmstat.pm.log 00:29:22.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721837529_collect-cpu-load.pm.log 00:29:22.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721837529_collect-cpu-temp.pm.log 00:29:22.943 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721837529_collect-bmc-pm.bmc.pm.log 00:29:23.882 18:12:10 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:29:23.882 18:12:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:29:23.882 18:12:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:23.882 18:12:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:23.882 18:12:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:23.882 18:12:10 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:23.882 18:12:10 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:23.882 18:12:10 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:23.882 18:12:10 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:23.882 18:12:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:23.882 18:12:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:23.882 18:12:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:23.882 18:12:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:23.882 18:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.882 18:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:23.882 18:12:10 -- pm/common@44 -- $ pid=2931683 00:29:23.882 18:12:10 -- pm/common@50 -- $ kill -TERM 2931683 00:29:23.882 18:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.882 18:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:23.882 18:12:10 -- pm/common@44 -- $ pid=2931685 00:29:23.882 18:12:10 -- pm/common@50 -- $ kill 
-TERM 2931685 00:29:23.882 18:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.882 18:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:23.882 18:12:10 -- pm/common@44 -- $ pid=2931687 00:29:23.882 18:12:10 -- pm/common@50 -- $ kill -TERM 2931687 00:29:23.882 18:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.882 18:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:23.882 18:12:10 -- pm/common@44 -- $ pid=2931719 00:29:23.882 18:12:10 -- pm/common@50 -- $ sudo -E kill -TERM 2931719 00:29:23.882 + [[ -n 2572153 ]] 00:29:23.882 + sudo kill 2572153 00:29:23.892 [Pipeline] } 00:29:23.910 [Pipeline] // stage 00:29:23.915 [Pipeline] } 00:29:23.932 [Pipeline] // timeout 00:29:23.940 [Pipeline] } 00:29:23.958 [Pipeline] // catchError 00:29:23.964 [Pipeline] } 00:29:23.982 [Pipeline] // wrap 00:29:23.989 [Pipeline] } 00:29:24.006 [Pipeline] // catchError 00:29:24.017 [Pipeline] stage 00:29:24.019 [Pipeline] { (Epilogue) 00:29:24.035 [Pipeline] catchError 00:29:24.037 [Pipeline] { 00:29:24.053 [Pipeline] echo 00:29:24.054 Cleanup processes 00:29:24.059 [Pipeline] sh 00:29:24.344 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:24.344 2931818 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:24.344 2931959 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:24.360 [Pipeline] sh 00:29:24.645 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:24.645 ++ grep -v 'sudo pgrep' 00:29:24.645 ++ awk '{print $1}' 00:29:24.645 + sudo kill -9 2931818 00:29:24.660 [Pipeline] sh 00:29:24.946 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:33.069 [Pipeline] sh 00:29:33.359 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:33.360 Artifacts sizes are good 00:29:33.374 [Pipeline] archiveArtifacts 00:29:33.380 Archiving artifacts 00:29:33.575 [Pipeline] sh 00:29:33.878 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:33.891 [Pipeline] cleanWs 00:29:33.899 [WS-CLEANUP] Deleting project workspace... 00:29:33.899 [WS-CLEANUP] Deferred wipeout is used... 00:29:33.906 [WS-CLEANUP] done 00:29:33.907 [Pipeline] } 00:29:33.921 [Pipeline] // catchError 00:29:33.929 [Pipeline] sh 00:29:34.208 + logger -p user.info -t JENKINS-CI 00:29:34.217 [Pipeline] } 00:29:34.235 [Pipeline] // stage 00:29:34.241 [Pipeline] } 00:29:34.259 [Pipeline] // node 00:29:34.265 [Pipeline] End of Pipeline 00:29:34.302 Finished: SUCCESS
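
Finished: SUCCESS closes the run. The process-cleanup idiom the epilogue traced (pgrep for anything still rooted in the workspace, drop the pgrep itself from the listing, force-kill the rest) reduces to a few lines; the path is the workspace from this run:

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # list survivors rooted in the workspace, minus the pgrep we just ran
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids || true   # '|| true' so an already-clean node does not fail the step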